Cell outage compensation mechanism based on deep reinforcement learning
First published: 2020-04-27
Abstract: To meet diversified service demands, wireless network architectures have grown increasingly complex and the number of network parameters has surged, placing ever greater operational pressure on mobile operators. Cell outage compensation, an important part of network self-organization, aims to compensate coverage or capacity by adjusting the parameters of neighboring cells, thereby reducing coverage holes and mitigating capacity degradation, and can effectively cut labor and operating costs. However, traditional cell outage compensation algorithms are mostly centralized: they require global network information, incur high overhead, and are difficult to implement. To address these problems, this paper proposes an outage compensation algorithm based on multi-agent deep reinforcement learning (DRL). Through interaction with the network environment, each compensating base station uses a DRL model to adjust its antenna downtilt and user transmit power according to the local state of its cell, and updates its compensation strategy according to the cell's resulting performance, ultimately achieving cell outage recovery and network capacity optimization using only local information. Comparisons with the optimal compensation scheme and a genetic-algorithm-based compensation scheme show that the proposed distributed method can effectively restore the performance of users in the outage cell and achieve near-optimal system capacity.
Cell outage compensation mechanism based on deep reinforcement learning
Abstract: With the explosive growth of smartphones and differentiated traffic demands, enhancing network performance while reducing expenditure has put great pressure on mobile network operators in recent years. Cell Outage Compensation (COC), as an important part of Self-Organizing Networks (SONs), aims to alleviate the impact of coverage holes and capacity degradation by changing the parameters of adjacent cells, which can effectively reduce operational expenditures. However, traditional approaches to cell outage compensation are usually centralized and require acquiring global network information. To overcome this issue, this paper proposes a COC method based on distributed deep reinforcement learning (DRL). Via interaction with the network environment, each compensating base station uses a DRL model to independently adjust its antenna downtilt and user power according to local information, and updates its adjustment strategy according to the resulting cell performance. Compared with a compensation scheme based on the genetic algorithm, the proposed distributed method can effectively recover the performance of users in the outage cells and reach a near-optimal system capacity.
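The per-base-station loop described in the abstract — observe local state, adjust downtilt and power, learn from the resulting cell performance — can be sketched as follows. This is a minimal illustration under assumed conditions, not the paper's method: a tabular Q-learner stands in for the deep network, and `ToyOutageEnv`, its parameter ranges, and its reward (negative distance to a hypothetical best compensation setting) are all invented for the sketch.

```python
import random
from collections import defaultdict

# Discrete adjustment actions, as in the abstract: downtilt and user power.
ACTIONS = ["tilt_up", "tilt_down", "power_up", "power_down", "hold"]

class CompensatingBS:
    """One learning agent per compensating base station.
    A Q-table replaces the paper's deep network for brevity."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick the best-known action.
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s2):
        # Standard one-step Q-learning update from observed cell performance r.
        best_next = max(self.q[(s2, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

class ToyOutageEnv:
    """Hypothetical single-cell environment: reward grows as downtilt and
    power approach the (unknown to the agent) best compensation setting."""
    TARGET_TILT, TARGET_POWER = 8, 3

    def __init__(self):
        self.tilt, self.power = 12, 0   # initial configuration

    def state(self):
        return (self.tilt, self.power)

    def step(self, action):
        if action == "tilt_up":
            self.tilt = min(self.tilt + 1, 15)
        elif action == "tilt_down":
            self.tilt = max(self.tilt - 1, 0)
        elif action == "power_up":
            self.power = min(self.power + 1, 6)
        elif action == "power_down":
            self.power = max(self.power - 1, 0)
        # Reward: negative distance to the best setting (toy stand-in for
        # the outage users' measured performance).
        reward = -(abs(self.tilt - self.TARGET_TILT)
                   + abs(self.power - self.TARGET_POWER))
        return self.state(), reward

def train(episodes=500, steps=30, seed=0):
    random.seed(seed)
    agent = CompensatingBS()
    for _ in range(episodes):
        env = ToyOutageEnv()
        s = env.state()
        for _ in range(steps):
            a = agent.act(s)
            s2, r = env.step(a)
            agent.learn(s, a, r, s2)
            s = s2
    return agent

agent = train()
```

In the paper's distributed setting, each compensating base station would run such a loop independently on its own local observations, which is what removes the need for global network information.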
Keywords: self-organizing networks; cell outage compensation; deep reinforcement learning