
王文剑


Journal paper

A heuristic training for support vector regression

Wenjian Wang, Zongben Xu

Neurocomputing 61 (2004) 259-275


Abstract / Description

A heuristic method for accelerating support vector machine (SVM) training based on a measurement of similarity among samples is presented in this paper. To train an SVM, a quadratic function with linear constraints is optimized. The original formulation of the SVM objective function is efficient during the optimization phase, but the resulting discriminant function often contains redundant terms. The economy of the discriminant function of an SVM depends on a sparse subset of the training data, namely the support vectors selected by the optimization procedure. The motivation for a sparsity-controlled version of an SVM is therefore practical: it reduces the computational expense of SVM testing and enhances the interpretability of the model. Besides the existing approaches, an intuitive way to achieve this is to control the sparsity of the support vectors by reducing the training data without sacrificing generalization performance. The most attractive feature of this idea is that it makes SVM training fast, especially for large training sets, because the size of the optimization problem can be decreased greatly. In this paper, a heuristic rule is utilized to reduce the training data for support vector regression (SVR). First, all the training data are divided into several groups, and then, for each group, some training vectors are discarded based on the measurement of similarity among samples. This reduction is carried out in the original data space before SVM training, so the extra computational expense is almost negligible. Even when the preprocessing cost is included, the total time is still less than that of training the SVM on the complete training set. As a result, the number of vectors for SVR training becomes small and the training time can be decreased greatly without compromising the generalization capability of SVMs. Simulation results show the effectiveness of the presented method.
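For illustration only, the following Python sketch (using NumPy and scikit-learn, neither of which is used in the paper) mimics the reduction idea described above: partition the training data into groups, drop samples within each group that are too similar to ones already kept, and train an ordinary SVR on the reduced set. The abstract does not specify the grouping rule or the similarity measure, so the k-means grouping, the Euclidean distance threshold sim_threshold, and the SVR hyperparameters below are all hypothetical choices, not the authors' actual procedure.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR


def reduce_training_set(X, y, n_groups=10, sim_threshold=0.1):
    # Group the training data (k-means is an assumed grouping rule; the
    # paper's actual grouping criterion is not given in the abstract).
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)
    keep = []
    for g in range(n_groups):
        idx = np.where(groups == g)[0]
        kept = []
        for i in idx:
            # Discard a sample if it is too similar (here: too close in
            # Euclidean distance, another assumption) to one already kept.
            if all(np.linalg.norm(X[i] - X[j]) > sim_threshold for j in kept):
                kept.append(i)
        keep.extend(kept)
    return X[keep], y[keep]


# Usage on synthetic 1-D regression data: the reduced set yields a much
# smaller quadratic program for the SVR solver.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(2000, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(2000)

X_red, y_red = reduce_training_set(X, y, n_groups=20, sim_threshold=0.05)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_red, y_red)
print(len(X_red), "of", len(X), "training samples kept for SVR training")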

