MTNAS: Multi-task Neural Architecture Search Based on Deep Reinforcement Learning
First published: 2023-03-10
Abstract: Conventional neural architecture search aims to automatically find a single best-performing architecture, and can therefore optimize only one learning objective, such as accuracy. In real-world scenarios, however, multiple objectives must be considered, such as computational cost and the various constraints imposed by different computing resources. Existing methods typically address multi-objective neural architecture search by optimizing a predefined utility function, but they cannot handle multi-task scenarios with different latency budgets and different data. We therefore propose a Multi-Task Neural Architecture Search framework (MTNAS). The multi-task architecture search problem is first converted into a constrained optimization problem and modeled as a Markov Decision Process; a policy optimization algorithm based on the IMPortance weighted Actor-Learner Architecture (IMPALA) is then proposed to learn the optimal policy. Validation on the CIFAR-10/100 datasets shows that the proposed framework can solve multi-task neural architecture search, with search efficiency 4.25 to 10.78 times higher than that of existing methods.
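The policy optimization in MTNAS builds on IMPALA, whose defining ingredient is the importance-weighted (V-trace) off-policy correction that lets a central learner train on trajectories generated by stale actor policies. Below is a minimal sketch of the V-trace value-target recursion as published in the IMPALA algorithm; the function name and the toy inputs are illustrative and not taken from this paper's implementation.

```python
def vtrace_targets(rewards, values, bootstrap_value,
                   rhos, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace value targets v_t for one trajectory.

    rewards[t]      : reward r_t
    values[t]       : learner value estimate V(x_t)
    bootstrap_value : V(x_T) at the trajectory's end
    rhos[t]         : importance ratio pi(a_t|x_t) / mu(a_t|x_t)
                      between learner policy pi and actor policy mu
    """
    T = len(rewards)
    # clipped importance weights, as in V-trace
    clipped_rhos = [min(rho_bar, r) for r in rhos]
    clipped_cs = [min(c_bar, r) for r in rhos]
    # next-state values V(x_{t+1}), bootstrapped at the end
    next_values = values[1:] + [bootstrap_value]
    # delta_t = rho_t * (r_t + gamma * V(x_{t+1}) - V(x_t))
    deltas = [clipped_rhos[t] * (rewards[t] + gamma * next_values[t] - values[t])
              for t in range(T)]
    # backward recursion: v_t = V(x_t) + delta_t + gamma * c_t * (v_{t+1} - V(x_{t+1}))
    vs = [0.0] * T
    acc = 0.0  # holds v_{t+1} - V(x_{t+1})
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        vs[t] = values[t] + acc
    return vs
```

When the actor and learner policies coincide (all ratios equal to 1), the targets reduce to ordinary n-step returns; the clipping thresholds rho_bar and c_bar trade off bias against variance as the policies diverge.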
Keywords: deep learning; neural architecture search; reinforcement learning; weight sharing