
26 results found for this scholar

Upload date

November 30, 2020

[Journal Article] Improve Multi-Instance Neural Networks through Feature Selection

Neural Processing Letters, 2004, 19: 1–10

February 1, 2004

Abstract

Multi-instance learning is regarded as a new learning framework where the training examples are bags composed of instances without labels, and the task is to predict the labels of unseen bags through analyzing the training bags with known labels. Recently, a multi-instance neural network BP-MIP was proposed. In this paper, BP-MIP is improved through adopting two different feature selection techniques, i.e. feature scaling with Diverse Density and feature reduction with principal component analysis. In detail, before feature vectors are fed to a BP-MIP neural network, they are scaled by the feature weights found by running Diverse Density on the training data, or projected by a linear transformation matrix formed by principal component analysis. Experiments show that these feature selection mechanisms can significantly improve the performance of BP-MIP.
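
The two preprocessing routes described above can be sketched independently of the network itself. The following Python fragment is a minimal illustration, assuming bags are lists of NumPy vectors; dd_weights is a hypothetical stand-in for the per-feature weights that Diverse Density optimization would return, and PCA is taken from scikit-learn rather than from the paper's own implementation.

import numpy as np
from sklearn.decomposition import PCA

def scale_bags(bags, dd_weights):
    # Feature scaling: multiply every instance by the Diverse Density weights.
    return [[x * dd_weights for x in bag] for bag in bags]

def project_bags(bags, n_components):
    # Feature reduction: fit PCA on all instances pooled across bags,
    # then project every instance onto the leading principal components.
    pooled = np.vstack([x for bag in bags for x in bag])
    pca = PCA(n_components=n_components).fit(pooled)
    return [[pca.transform(x[None, :])[0] for x in bag] for bag in bags]

Either transformed bag set is then fed to BP-MIP unchanged, which is what allows the same network to benefit from both feature selection mechanisms.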

Upload date

November 30, 2020

[Journal Article] Adapting RBF neural networks to multi-instance learning

Neural Processing Letters, 2006, 23: 1–26

February 1, 2006

Abstract

In multi-instance learning, the training examples are bags composed of instances without labels, and the task is to predict the labels of unseen bags through analyzing the training bags with known labels. A bag is positive if it contains at least one positive instance, while it is negative if it contains no positive instance. In this paper, a neural-network-based multi-instance learning algorithm named RBF-MIP is presented, which is derived from the popular radial basis function (RBF) methods. Briefly, the first layer of an RBF-MIP neural network is composed of clusters of bags formed by merging training bags agglomeratively, where the Hausdorff metric is utilized to measure distances between bags and between clusters. Weights of the second layer of the RBF-MIP neural network are optimized by minimizing a sum-of-squares error function and are worked out through singular value decomposition (SVD). Experiments on real-world multi-instance benchmark data, artificial multi-instance benchmark data and natural scene image database retrieval are carried out. The experimental results show that RBF-MIP is among the several best learning algorithms on multi-instance problems.
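
As an illustration of the bag-level metric, here is a minimal sketch of the maximal Hausdorff distance between two bags; the abstract does not pin down which Hausdorff variant is used, so the classic max form is assumed. The closing comment notes how the SVD-based solution of the second layer can be expressed.

import numpy as np

def hausdorff(bag_a, bag_b):
    # H(A, B) = max(h(A, B), h(B, A)), where h(A, B) = max_{a in A} min_{b in B} ||a - b||.
    A, B = np.asarray(bag_a), np.asarray(bag_b)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise instance distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# With Phi holding the first-layer (cluster) activations, one row per training
# bag, and T the target outputs, the sum-of-squares-optimal second-layer
# weights are W = np.linalg.pinv(Phi) @ T; NumPy's pinv is computed via SVD.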

Upload date

November 30, 2020

[Journal Article] Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization

IEEE Transactions on Knowledge and Data Engineering, 2006, 18(10): 1338–1351

August 28, 2006

Abstract

In multilabel learning, each instance in the training set is associated with a set of labels and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed by proposing a neural network algorithm named BP-MLL, i.e., backpropagation for multilabel learning. It is derived from the popular backpropagation algorithm through employing a novel error function capturing the characteristics of multilabel learning, i.e., the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms.
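
The ranking requirement stated above can be written as a pairwise exponential penalty; the following is a minimal sketch of a per-instance error in that spirit, assuming c is the vector of network outputs, and relevant/irrelevant are index lists for the labels the instance does and does not carry.

import numpy as np

def bpmll_error(c, relevant, irrelevant):
    # E = (1 / (|Y| * |Ybar|)) * sum over (k, l) in Y x Ybar of exp(-(c[k] - c[l])):
    # every relevant-label output c[k] is pushed above every irrelevant-label output c[l].
    pairs = [np.exp(-(c[k] - c[l])) for k in relevant for l in irrelevant]
    return sum(pairs) / (len(relevant) * len(irrelevant))

Minimizing an error of this form with standard backpropagation, instead of a plain per-label squared error, is what makes the network rank relevant labels above irrelevant ones.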

Upload date

November 30, 2020

[Journal Article] Solving multi-instance problems with classifier ensemble based on constructive clustering

Knowledge and Information Systems, 2006, 11: 155–170

August 10, 2006

Abstract

In multi-instance learning, the training set is composed of labeled bags, each consisting of many unlabeled instances; that is, an object is represented by a set of feature vectors instead of only one feature vector. Most current multi-instance learning algorithms work through adapting single-instance learning algorithms to the multi-instance representation, while this paper proposes a new solution which goes the opposite way, that is, adapting the multi-instance representation to single-instance learning algorithms. In detail, the instances of all the bags are collected together and clustered into d groups first. Each bag is then re-represented by d binary features, where the value of the ith feature is set to one if the concerned bag has instances falling into the ith group and zero otherwise. Thus, each bag is represented by one feature vector so that single-instance classifiers can be used to distinguish different classes of bags. Through repeating the above process with different values of d, many classifiers can be generated and then combined into an ensemble for prediction. Experiments show that the proposed method works well on standard as well as generalized multi-instance problems.
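
A minimal sketch of the re-representation step, with k-means standing in for the clustering routine (the abstract leaves the concrete clusterer open):

import numpy as np
from sklearn.cluster import KMeans

def rerepresent(bags, d):
    # Pool every instance from every bag and cluster the pool into d groups.
    pooled = np.vstack([np.asarray(bag) for bag in bags])
    km = KMeans(n_clusters=d, n_init=10).fit(pooled)
    # Re-represent each bag by d binary features: feature i is one iff the bag
    # has at least one instance falling into group i.
    feats = np.zeros((len(bags), d))
    for i, bag in enumerate(bags):
        feats[i, np.unique(km.predict(np.asarray(bag)))] = 1
    return feats

Repeating this for several values of d yields one single-instance training set, hence one classifier, per d; the ensemble then combines their predictions.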

Upload date

November 30, 2020

[Journal Article] ML-KNN: A lazy learning approach to multi-label learning

Pattern Recognition, 2007, 40(7): 2038–2048

July 1, 2007

Abstract

Multi-label learning originated from the investigation of the text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named ML-KNN is presented, which is derived from the traditional K-nearest neighbor (KNN) algorithm. In detail, for each unseen instance, its K nearest neighbors in the training set are first identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, the maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that ML-KNN achieves superior performance to some well-established multi-label learning algorithms.

Keywords: Machine learning, Multi-label learning, Lazy learning, K-nearest neighbor, Functional genomics, Natural scene classification, Text categorization
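
A minimal sketch of the decision rule for one unseen instance, assuming its K nearest neighbors have already been identified and the probability estimates were gathered from the training set beforehand; all names here are illustrative rather than taken from the paper.

import numpy as np

def mlknn_predict(counts, prior, cond):
    # counts[l]: how many of the K neighbors carry label l.
    # prior[y][l]: smoothed estimate of P(label l present) for y = 1, absent for y = 0.
    # cond[y][l][c]: smoothed estimate of P(c neighbors carry label l | y).
    labels = []
    for l, c in enumerate(counts):
        # MAP decision: keep label l iff the "present" posterior dominates.
        labels.append(1 if prior[1][l] * cond[1][l][c] >= prior[0][l] * cond[0][l][c] else 0)
    return np.array(labels)

In the actual algorithm, the priors and conditionals are frequency counts with Laplace smoothing, so no model beyond these counts is ever trained, which is what makes the approach lazy.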

Collaborating scholars

  • No collaborating authors yet