13 results found for this scholar.

Upload date: March 23, 2021

[Journal Article] Sharp exponential bounds for the Gaussian regularized Whittaker–Kotelnikov–Shannon sampling series

Journal of Approximation Theory, 2019, 245: 73–82

Published: September 1, 2019

Abstract

Fast reconstruction of a bandlimited function from its finite oversampling data has been a fundamental problem in sampling theory. As the number of sample data increases to infinity, exponentially decaying reconstruction errors can be achieved by many methods in the literature. In fact, it is generally conjectured that when the optimal method is used, the dominant term in the error of reconstructing a function bandlimited to [−δπ, δπ] (0 < δ < 1) from its data sampled at the integer points on [−n, n] is exp(−π(1−δ)n). So far, the best estimate for the exponential constant among regularization methods is π(1−δ)/2, achieved by the highly efficient Gaussian regularized Whittaker–Kotelnikov–Shannon sampling series. We prove in this paper that the exponential constant π(1−δ)/2 is optimal for this method. Moreover, the optimal variance of the Gaussian regularizer is provided.

Keywords: Bandlimited functions; The Paley–Wiener space; Sampling theorems; Gaussian regularization; Error bounds
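The series itself is simple to implement. Below is a minimal numerical sketch (Python), not code from the paper: it reconstructs f(x) from the 2n integer samples nearest to x, damping the Shannon series with a Gaussian whose variance r² = n/(π(1−δ)) is one common choice from the regularized-sampling literature; pinning down the truly optimal variance is part of what the paper does.

import numpy as np

def gauss_wks(f, x, n, delta):
    """Gaussian regularized WKS series at a point x (illustrative sketch)."""
    j = np.arange(np.floor(x) - n + 1, np.ceil(x) + n)   # 2n nearest integer samples
    r2 = n / (np.pi * (1.0 - delta))                     # variance: a common literature choice
    gauss = np.exp(-(x - j) ** 2 / (2.0 * r2))           # Gaussian regularizer
    return np.sum(f(j) * np.sinc(x - j) * gauss)         # np.sinc(t) = sin(pi*t)/(pi*t)

# demo: np.sinc(delta*t) is bandlimited to [-delta*pi, delta*pi]
delta, n, x = 0.5, 20, 0.3
f = lambda t: np.sinc(delta * t)
print(abs(gauss_wks(f, x, n, delta) - f(x)))  # on the order of exp(-pi*(1-delta)*n/2) ~ 1e-7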


Upload date: March 23, 2021

[Journal Article] Statistical margin error bounds for L1-norm support vector machines

Neurocomputing, 2019, 339: 210–216

Published: April 28, 2019

Abstract

Compared with Lp-norm (1 < p < ∞) Support Vector Machines (SVMs), the L1-norm SVM enjoys the nice property of simultaneously performing classification and feature selection. Margin error bounds for the SVM on Hilbert spaces (or on more general q-uniformly smooth Banach spaces) have been obtained in the literature to justify the strategy of maximizing the margin in SVM. In this paper, we estimate the margin error bound for L1-norm SVM methods and give a geometrical interpretation of the result. We show that the fat-shattering dimensions of the Banach spaces ℓ1 and ℓ∞ are both infinite. Therefore, we establish margin error bounds for the SVM on finite-dimensional spaces with the L1-norm, thus supplying statistical justification for the large margin classification of the L1-norm SVM on finite-dimensional spaces. To complete the theory, corresponding results for the L∞-norm SVM are also presented.

Keywords: Margin error bounds; L1-norm support vector machines; Geometrical interpretation; The fat-shattering dimension; The classification hyperplane
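The simultaneous classification and feature selection is easy to observe numerically. The sketch below uses scikit-learn's LinearSVC, which solves a squared-hinge variant of the L1-norm SVM rather than the exact formulation analyzed in the paper; the synthetic dataset and the value C=0.1 are arbitrary choices for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# 100 features but only 5 informative ones: the L1 penalty should zero out most weights
X, y = make_classification(n_samples=200, n_features=100, n_informative=5,
                           n_redundant=0, random_state=0)

l1_svm = LinearSVC(penalty='l1', loss='squared_hinge', dual=False, C=0.1)
l1_svm.fit(X, y)

w = l1_svm.coef_.ravel()
print('nonzero weights:', np.count_nonzero(w), 'of', w.size)  # sparse weight vector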


Upload date: March 23, 2021

[Journal Article] Margin Error Bounds for Support Vector Machines on Reproducing Kernel Banach Spaces

Neural Computation, 2017, 29(11): 3078–3093

Published: November 1, 2017

Abstract

Support vector machines (SVMs), which maximize the margin from patterns to the separation hyperplane subject to correct classification, have achieved remarkable success in machine learning. Margin error bounds based on Hilbert spaces have been introduced in the literature to justify the strategy of maximizing the margin in SVMs. Recently, there has been much interest in developing Banach space methods for machine learning, and large margin classification in Banach spaces is a focus of such attempts. In this letter, we establish a margin error bound for the SVM on reproducing kernel Banach spaces, thus supplying statistical justification for large-margin classification in Banach spaces.
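For concreteness, the quantity being maximized is the geometric margin min_i y_i(⟨w, x_i⟩ + b)/‖w‖*, where ‖·‖* is the dual of the norm placed on the patterns; in a Hilbert space this reduces to the familiar Euclidean margin. A small sketch on hypothetical toy data:

import numpy as np

def geometric_margin(w, b, X, y, q=2):
    """min_i y_i(<w, x_i> + b) / ||w||_q, with q the dual exponent:
    q=2 for the Hilbert-space SVM, q=inf when patterns are measured in ell_1."""
    functional = y * (X @ w + b)                 # functional margins y_i(<w, x_i> + b)
    return functional.min() / np.linalg.norm(w, ord=q)

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])  # toy patterns
y = np.array([1, 1, -1, -1])
w, b = np.array([1.0, 1.0]), 0.0
print(geometric_margin(w, b, X, y))              # Euclidean margin
print(geometric_margin(w, b, X, y, q=np.inf))    # margin for ell_1 patterns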


Upload date: March 23, 2021

[Journal Article] Optimal sampling points in reproducing kernel Hilbert spaces

Journal of Complexity, 2016, 34: 129–151

Published: June 1, 2016

Abstract

The recent development of compressed sensing seeks to extract information from as few samples as possible. In such applications, since the number of samples is restricted, one should deploy the sampling points wisely. We are thus motivated to study the optimal distribution of finitely many sampling points in reproducing kernel Hilbert spaces, the natural background function spaces for sampling. Formulating the question in the framework of optimal reconstruction yields a minimization problem. In the discrete measure case, we estimate the distance between the optimal subspace resulting from a general Karhunen–Loève transform and the kernel space, obtaining another algorithm that is computationally favorable. Numerical experiments are then presented to illustrate the effectiveness of the algorithms in the search for optimal sampling points.

Keywords: Sampling points; Optimal distribution; Reproducing kernels; The Karhunen–Loève transform
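A standard computational route to problems of this kind, given here purely as an illustration and not claimed to be the paper's algorithm, is to choose points greedily so that each new point maximizes the power function, i.e. the worst-case reconstruction error of kernel interpolation based on the points chosen so far.

import numpy as np

def gaussian_kernel(s, t, sigma=0.3):
    return np.exp(-(s - t) ** 2 / (2 * sigma ** 2))

def greedy_points(candidates, m, sigma=0.3):
    """Greedily maximize P(x)^2 = K(x,x) - k_x^T K_XX^{-1} k_x over the candidates."""
    chosen = [candidates[0]]
    for _ in range(m - 1):
        X = np.array(chosen)
        Kxx = gaussian_kernel(X[:, None], X[None, :], sigma)          # Gram matrix
        Kc = gaussian_kernel(candidates[:, None], X[None, :], sigma)  # k_x for all x
        alpha = np.linalg.solve(Kxx, Kc.T)                            # K_XX^{-1} k_x
        p2 = 1.0 - np.einsum('ij,ji->i', Kc, alpha)                   # K(x,x) = 1 here
        chosen.append(candidates[int(np.argmax(p2))])
    return np.sort(np.array(chosen))

print(greedy_points(np.linspace(-1.0, 1.0, 201), m=8))  # points spread over [-1, 1]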


Upload date: March 23, 2021

[Journal Article] Vector-valued reproducing kernel Banach spaces with applications to multi-task learning

Journal of Complexity, 2013, 29(2): 195–215

Published: April 1, 2013

Abstract

Motivated by multi-task machine learning with Banach spaces, we propose the notion of vector-valued reproducing kernel Banach spaces (RKBSs). Basic properties of these spaces and of the associated reproducing kernels are investigated. We also present feature map constructions and several concrete examples of vector-valued RKBSs. The theory is then applied to multi-task machine learning. In particular, the representer theorem and characterization equations for the minimizer of regularized learning schemes in vector-valued RKBSs are established.

Keywords: Vector-valued reproducing kernel Banach spaces; Feature maps; Regularized learning; The representer theorem; Characterization equations
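As a minimal sketch of the representer theorem at work, the code below runs multi-task kernel ridge regression in the simpler vector-valued RKHS (Hilbert-space) setting rather than the paper's Banach-space setting, with a separable matrix-valued kernel K(x, x') = k(x, x')B; the task-coupling matrix B and all data are hypothetical.

import numpy as np

def k(s, t, sigma=1.0):
    """Scalar Gaussian kernel."""
    return np.exp(-np.sum((s - t) ** 2) / (2 * sigma ** 2))

def fit_multitask(X, Y, B, lam=1e-2):
    """By the representer theorem, the minimizer of
    sum_j ||f(x_j) - y_j||^2 + lam*||f||^2 has the form f(x) = sum_j k(x, x_j) B c_j."""
    m, T = Y.shape
    Ks = np.array([[k(a, b) for b in X] for a in X])          # scalar Gram matrix
    G = np.kron(Ks, B)                                        # block kernel matrix
    c = np.linalg.solve(G + lam * np.eye(m * T), Y.ravel())   # stacked coefficients c_j
    return c.reshape(m, T)

def predict(x, X, C, B):
    kx = np.array([k(x, xj) for xj in X])
    return B @ (C.T @ kx)                                     # sum_j k(x, x_j) B c_j

X = np.linspace(0.0, 1.0, 20)[:, None]                        # shared inputs
Y = np.column_stack([np.sin(2 * np.pi * X[:, 0]),             # two related tasks
                     np.sin(2 * np.pi * X[:, 0] + 0.3)])
B = np.array([[1.0, 0.8], [0.8, 1.0]])                        # assumed task coupling
C = fit_multitask(X, Y, B)
print(predict(np.array([0.25]), X, C, B))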


Collaborating scholars

  • None listed