22 results found for this scholar

Upload date: October 30, 2020

[Journal Article] Kernel Nearest-Neighbor Algorithm

Neural Processing Letters, 2002, 15: 147–156

Published: April 1, 2002

Abstract

The ‘kernel approach’ has attracted great attention with the development of the support vector machine (SVM) and has been studied in a general way. It offers an alternative solution for increasing the computational power of linear learning machines by mapping data into a high-dimensional feature space. In this paper, this approach is extended to the well-known nearest-neighbor algorithm. It is realized by substituting a kernel distance metric for the original one in Hilbert space, and the corresponding algorithm is called the kernel nearest-neighbor algorithm. Three data sets, an artificial data set, the BUPA liver disorders database, and the USPS database, were used for testing. The kernel nearest-neighbor algorithm was compared with the conventional nearest-neighbor algorithm and SVM. Experiments show that the kernel nearest-neighbor algorithm is more powerful than the conventional nearest-neighbor algorithm and can compete with SVM.
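As a hedged illustration of the distance substitution described in this abstract, the squared distance between two points in the kernel-induced feature space can be computed from kernel evaluations alone: d^2(x, y) = k(x, x) - 2 k(x, y) + k(y, y). The sketch below is not the paper's implementation; the polynomial kernel, the 1-NN setting, and the toy data are assumptions made for brevity.

```python
import numpy as np

def poly_kernel(x, y, degree=2, coef0=1.0):
    """Polynomial kernel k(x, y) = (x . y + coef0)^degree (an assumed choice)."""
    return (np.dot(x, y) + coef0) ** degree

def kernel_distance_sq(x, y, kernel=poly_kernel):
    """Squared distance in the kernel-induced feature space:
    d^2(x, y) = k(x, x) - 2 k(x, y) + k(y, y)."""
    return kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)

def kernel_nearest_neighbor(x, train_X, train_y, kernel=poly_kernel):
    """1-NN classification with the kernel distance substituted for the
    ordinary Euclidean distance (a minimal sketch of the idea)."""
    dists = [kernel_distance_sq(x, xi, kernel) for xi in train_X]
    return train_y[int(np.argmin(dists))]

# Toy usage: two classes in 2-D; the query point lies near class 0.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
print(kernel_nearest_neighbor(np.array([0.2, 0.1]), train_X, train_y))  # -> 0
```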

Upload date: October 30, 2020

[Journal Article] Karyotyping of comparative genomic hybridization human metaphases using kernel nearest-neighbor algorithm

Cytometry, 2002, 48(4): 202–208

Published: July 26, 2002

Abstract

Background: Comparative genomic hybridization (CGH) is a relatively new molecular cytogenetic method that detects chromosomal imbalances. Automatic karyotyping is an important step in CGH analysis because the precise position of the chromosome abnormality must be located, and manual karyotyping is tedious and time-consuming. In the past, computer-aided karyotyping was done using 4′,6-diamidino-2-phenylindole, dihydrochloride (DAPI)-inverse images, which required complex image enhancement procedures. Methods: An innovative method, the kernel nearest-neighbor (K-NN) algorithm, is proposed to accomplish automatic karyotyping. The algorithm is an application of the “kernel approach,” which offers an alternative solution to linear learning machines by mapping data into a high-dimensional feature space. By implicitly calculating the Euclidean or Mahalanobis distance in a high-dimensional image feature space, two kinds of K-NN algorithms are obtained. New feature extraction methods based on the multicolor information in CGH images are used for the first time. Results: Experimental results show that the feature extraction method using multicolor information in CGH images greatly improves the classification success rate. A high success rate of about 91.5% was achieved, which shows that the K-NN classifier efficiently accomplishes automatic chromosome classification from relatively few samples. Conclusions: The feature extraction method proposed here and the K-NN classifiers offer a promising computerized intelligent system for automatic karyotyping of CGH human chromosomes.
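As a worked restatement in notation of my own choosing (not necessarily the paper's), the two distances mentioned in the abstract can be expressed through the feature map φ and the kernel k(x, y) = ⟨φ(x), φ(y)⟩:

```latex
% Kernel-induced Euclidean distance, computable from kernel values alone:
d_E^2(x, y) = \|\phi(x) - \phi(y)\|^2 = k(x, x) - 2\,k(x, y) + k(y, y)

% Mahalanobis-style distance in the feature space, with \Sigma_\phi a
% covariance matrix in that space; in practice it is evaluated implicitly
% through kernel evaluations on the training samples:
d_M^2(x, y) = \bigl(\phi(x) - \phi(y)\bigr)^{\top} \Sigma_\phi^{-1}
              \bigl(\phi(x) - \phi(y)\bigr)
```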

Upload date: October 30, 2020

[Journal Article] Discriminative cluster adaptive training

IEEE Transactions on Audio, Speech, and Language Processing, 2006, 14(5): 1694-170

Published: August 21, 2006

Abstract

Multiple-cluster schemes, such as cluster adaptive training (CAT) or eigenvoice systems, are a popular approach for rapid speaker and environment adaptation. Interpolation weights are used to transform a multiple-cluster canonical model into a standard hidden Markov model (HMM) set representative of an individual speaker or acoustic environment. Maximum-likelihood training for CAT has previously been investigated. However, in state-of-the-art large-vocabulary continuous speech recognition systems, discriminative training is commonly employed. This paper investigates applying discriminative training to multiple-cluster systems. In particular, minimum phone error (MPE) update formulae for CAT systems are derived. In order to use MPE in this case, modifications to the standard MPE smoothing function and the prior distribution associated with MPE training are required. A more complex adaptive training scheme combining both interpolation weights and linear transforms, a structured transform (ST), is also discussed within the MPE training framework. Discriminatively trained CAT and ST systems were evaluated on a state-of-the-art conversational telephone speech task. These multiple-cluster systems were found to outperform both standard and adaptively trained systems.
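To make the interpolation step concrete, the sketch below shows how a speaker-specific Gaussian mean can be formed from a multiple-cluster canonical model. This is a minimal illustration of CAT mean interpolation only; the MPE weight and model updates derived in the paper are not reproduced here.

```python
import numpy as np

def cat_speaker_mean(cluster_means, weights):
    """Cluster adaptive training: the speaker-specific mean of one Gaussian
    component is a weighted combination of its P cluster means,
    mu_s = sum_p lambda_p * mu_p (a sketch of the interpolation step only).

    cluster_means: (P, D) array, one canonical mean per cluster.
    weights:       (P,) interpolation weight vector for this speaker.
    """
    return np.asarray(weights) @ np.asarray(cluster_means)

# Toy usage: 3 clusters with 2-dimensional means.
means = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
lam = np.array([0.5, 0.3, 0.2])
print(cat_speaker_mean(means, lam))  # -> approximately [0.7 0.9]
```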

Upload date: October 30, 2020

[Journal Article] Bayesian Adaptive Inference and Adaptive Training

IEEE Transactions on Audio, Speech, and Language Processing, 2007, 15(6): 1932-194

Published: July 23, 2007

Abstract

Large-vocabulary speech recognition systems are often built using found data, such as broadcast news. In contrast to carefully collected data, found data normally contains multiple acoustic conditions, such as different speakers or environmental noise. Adaptive training is a powerful approach for building systems on such data. Here, transforms are used to represent the different acoustic conditions, and a canonical model is then trained given this set of transforms. This paper describes a Bayesian framework for adaptive training and inference, which addresses some limitations of standard maximum-likelihood approaches. In contrast to the standard approach, the adaptively trained system can be used directly in unsupervised inference, rather than having to rely on initial hypotheses being present. In addition, robust recognition performance can be obtained with limited adaptation data. The limited-data problem often occurs in testing, as there is no control over the amount of adaptation data available. In contrast, for adaptive training it is possible to control the system complexity to reflect the available data, so standard point estimates may be used. As the integral associated with Bayesian adaptive inference is intractable, various marginalization approximations are described, including a variational Bayes approximation. Both batch and incremental modes of adaptive inference are discussed. These approaches are applied to adaptive training of maximum-likelihood linear regression and evaluated on a large-vocabulary speech recognition task. Bayesian adaptive inference is shown to significantly outperform standard approaches.
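The intractable integral referred to in the abstract can be written, in generic notation of my own choosing, as the marginalization of the likelihood over the transform parameters; the variational Bayes approximation replaces this integral with a tractable bound built from an approximate transform posterior:

```latex
% Bayesian adaptive inference: the transform T is integrated out of the
% likelihood of the observations O under hypothesis H and canonical model M.
p(O \mid H, \mathcal{M}) = \int p(O \mid H, T, \mathcal{M})\, p(T)\, \mathrm{d}T
```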

Upload date: October 30, 2020

[Journal Article] Unsupervised Adaptation With Discriminative Mapping Transforms

IEEE Transactions on Audio, Speech, and Language Processing, 2009, 17(4)

Published: May 1, 2009

Abstract

The most commonly used approaches to speaker adaptation are based on linear transforms, as these can be robustly estimated from limited adaptation data. Although significant gains can be obtained by using discriminative criteria to train acoustic models, maximum-likelihood (ML) estimated transforms are still used for unsupervised adaptation, because discriminatively trained transforms are highly sensitive to errors in the adaptation supervision hypothesis. This paper describes a new framework for estimating transforms that are discriminative in nature but less sensitive to this hypothesis issue. A speaker-independent discriminative mapping transformation (DMT) is estimated during training; this transform is obtained after a speaker-specific ML-estimated transform has been applied for each training speaker. During recognition, an ML speaker-specific transform is found for each test-set speaker, and the speaker-independent DMT is then applied. This allows a transform that is discriminative in nature to be estimated indirectly, while only an ML speaker-specific transform needs to be found during recognition. The DMT technique is evaluated on an English conversational telephone speech task. Experiments showed that using DMTs in unsupervised adaptation led to significant gains over both standard ML and discriminatively trained transforms.
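A hedged sketch of the two-stage composition described above, assuming a simple affine (MLLR-style) mean transform; the discriminative estimation of the DMT itself, which is the substance of the paper, is not shown.

```python
import numpy as np

def apply_affine(mean, A, b):
    """Apply an affine mean transform mu' = A @ mu + b (MLLR-style, assumed)."""
    return A @ mean + b

def dmt_adapt(mean, ml_transform, dmt_transform):
    """Unsupervised adaptation with a discriminative mapping transform:
    first apply the speaker-specific ML-estimated transform, then the
    speaker-independent, discriminatively trained DMT on top of it."""
    A_ml, b_ml = ml_transform
    A_dmt, b_dmt = dmt_transform
    return apply_affine(apply_affine(mean, A_ml, b_ml), A_dmt, b_dmt)

# Toy usage with 2-dimensional means and illustrative transform values.
mu = np.array([1.0, -0.5])
ml = (np.eye(2) * 1.1, np.array([0.2, 0.0]))    # speaker-specific, ML-estimated
dmt = (np.eye(2), np.array([0.05, -0.05]))      # speaker-independent DMT
print(dmt_adapt(mu, ml, dmt))  # adapted mean after ML transform then DMT
```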

Collaborating Scholars

  • No co-authors yet