A Dynamic Cloud Task Scheduling Algorithm Based on a Queuing Model and Reinforcement Learning
First published: 2019-01-30
Abstract: As one of the core problems of cloud computing, effectively managing and scheduling cloud resources is a highly challenging research topic. To improve task scheduling efficiency, minimize task response time, and reduce energy consumption in heterogeneous cloud environments, this paper proposes QTPRL, a Dynamic Cloud Task Scheduling Algorithm Based on Queuing Theory and Pre-processed Reinforcement Learning. The algorithm first models cloud task scheduling on an M/M/S queuing model, adopting a single-queue, multi-resource-pool design that dispatches tasks to idle resource pools (physical machines), which shortens the waiting queue and thus reduces task waiting time. On this basis, a dynamic priority combining each task's length, deadline, and waiting time is constructed to pre-process the tasks, and a reinforcement learning method then assigns each task to an idle virtual machine on one of the physical machines, completing the scheduling. Experimental results show that, compared with traditional cloud task scheduling methods, the proposed algorithm effectively reduces task response time, improves resource utilization, and lowers system energy consumption.
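The M/M/S model underlying the scheduler has a closed-form expected waiting time: the Erlang C formula gives the probability that an arriving task must queue, from which the mean wait follows. As a minimal sketch (not the paper's implementation, which models S resource pools with specific arrival and service rates not stated in this abstract):

```python
import math

def erlang_c(s, a):
    """Probability an arriving task must wait in an M/M/S queue.
    s: number of servers (resource pools); a = lambda/mu (offered load).
    Requires a < s for a stable queue."""
    head = sum(a**k / math.factorial(k) for k in range(s))
    tail = a**s / math.factorial(s) * (s / (s - a))
    return tail / (head + tail)

def mean_wait(lam, mu, s):
    """Expected queueing delay W_q = C(s, lambda/mu) / (s*mu - lambda)."""
    a = lam / mu
    assert a < s, "queue is unstable when offered load >= number of servers"
    return erlang_c(s, a) / (s * mu - lam)
```

For s = 1 this reduces to the familiar M/M/1 result (waiting probability ρ, delay ρ/(μ−λ)), which is a quick sanity check; adding servers sharply reduces both, which is the motivation for the multi-resource-pool design.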
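The pre-processing and assignment steps can also be sketched. The abstract does not give the paper's priority formula or reward design, so the weights, the linear priority score, and the cost-minimizing Q-learning update below are all illustrative assumptions:

```python
import random
from collections import defaultdict

# Hypothetical weights; the paper's exact priority formula is not given here.
W_LEN, W_SLACK, W_WAIT = 0.4, 0.3, 0.3

def dynamic_priority(length, deadline, waited, now):
    """Lower score = more urgent: favors short tasks, tight deadlines,
    and tasks that have already waited a long time."""
    slack = max(deadline - now, 1e-6)  # time remaining before the deadline
    return W_LEN * length + W_SLACK * slack - W_WAIT * waited

def choose_vm(q_table, state, idle_vms, eps=0.1):
    """Epsilon-greedy selection over idle VMs; lower Q means lower
    expected response time, so exploitation picks the minimum."""
    if random.random() < eps:
        return random.choice(idle_vms)
    return min(idle_vms, key=lambda vm: q_table[(state, vm)])

def update_q(q_table, state, vm, cost, next_best, alpha=0.5, gamma=0.9):
    """Standard Q-learning update, with the observed response time as a
    cost to minimize; next_best is the minimum Q over the next state."""
    q = q_table[(state, vm)]
    q_table[(state, vm)] = q + alpha * (cost + gamma * next_best - q)
```

In use, tasks would be sorted by `dynamic_priority` before dispatch, and each completed task's measured response time fed back through `update_q`, so the scheduler gradually prefers VMs that finish similar tasks faster.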
Keywords: cloud computing; task scheduling; queuing model; reinforcement learning; energy saving