Dynamic Computation Offloading Based on Deep Reinforcement Learning
First posted: 2019-03-28
Abstract: Mobile edge computing (MEC) provides computation capability at the edge of the wireless network. To reduce execution delay, computation-intensive tasks can be offloaded from user equipment (UE) to the MEC server. How to allocate computational and wireless resources is one of the key issues in guaranteeing quality of service, and it is particularly challenging when tasks arrive dynamically. To minimize the sum execution delay of multiple users, this paper jointly optimizes the offloading decision and the allocation of both computational and wireless resources, and proposes a deep policy gradient (DPG) algorithm based on deep reinforcement learning. Simulation results show that the proposed DPG method achieves lower total execution delay than the baselines under different numbers of users, computation capacities, and wireless channel bandwidths.
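The abstract does not spell out the DPG algorithm itself, but the overall idea (a policy network samples offloading decisions, and the resulting total delay serves as the reward signal) can be sketched with a minimal REINFORCE-style policy gradient on a toy offloading model. All numbers below (CPU speeds, uplink rate, task sizes) and the equal-sharing resource model are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical parameters, not from the paper)
N = 5                     # number of users
f_local = 1e9             # local CPU speed per UE (cycles/s)
f_edge = 10e9             # edge server CPU speed (cycles/s), shared by offloaders
rate = 5e6                # uplink rate (bits/s), shared by offloaders
data = rng.uniform(1e5, 1e6, N)   # task input sizes (bits)
cycles = data * 1000              # required CPU cycles per task

def sum_delay(a):
    """Total execution delay for a binary offloading decision a (1 = offload)."""
    k = a.sum()
    d = np.where(a == 0, cycles / f_local, 0.0)   # local execution delay
    if k > 0:
        # assumption: offloaders split uplink bandwidth and edge CPU equally
        d = d + a * (data / (rate / k) + cycles / (f_edge / k))
    return d.sum()

# Policy: one independent Bernoulli offloading probability per user,
# parameterized by logits theta (a stand-in for the paper's policy network)
theta = np.zeros(N)

def sample(theta):
    p = 1.0 / (1.0 + np.exp(-theta))
    a = (rng.random(N) < p).astype(int)
    return a, p

# REINFORCE with a moving-average baseline to reduce gradient variance
lr, baseline = 0.1, None
init_delay = np.mean([sum_delay(sample(theta)[0]) for _ in range(200)])
for _ in range(3000):
    a, p = sample(theta)
    r = -sum_delay(a)                        # reward = negative total delay
    baseline = r if baseline is None else 0.95 * baseline + 0.05 * r
    theta += lr * (r - baseline) * (a - p)   # grad of log Bernoulli prob w.r.t. logits
final_delay = np.mean([sum_delay(sample(theta)[0]) for _ in range(200)])
```

Under this toy model, offloading everything is not optimal either (offloaders congest the shared uplink and edge CPU), so the learned policy must trade local computation against shared-resource contention, which is exactly the joint allocation problem the abstract describes.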
Keywords: wireless communication; computation offloading; mobile edge computing; reinforcement learning