Haizhang Zhang (张海樟)
Ph.D., Professor, Doctoral Supervisor
School of Mathematics (Zhuhai), Sun Yat-sen University
Discipline: mathematical analysis
Research interests: applied harmonic analysis, learning theory, function approximation
Education:
2006/08-2009/05, Syracuse University, Department of Mathematics, Ph.D.
2003/09-2006/07, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (Institute of Mathematics), M.S.
1999/09-2003/07, Beijing Normal University, Department of Mathematics, B.S.
Professional experience:
2010/06-present, Sun Yat-sen University, Professor and doctoral supervisor
2009/06-2010/05, University of Michigan, Postdoctoral researcher
2015/11-2016/07, Hong Kong University of Science and Technology, Visiting scholar
Research areas:
Applied harmonic analysis, learning theory, function approximation
Courses taught:
Mathematical analysis, Fourier analysis
Research projects:
1. NSFC General Program, 11971490, Frontier mathematical problems in learning theory and their applications, 2020-2023, Principal Investigator
2. NSFC General Program, 11571377, Key mathematical problems of reproducing kernels and their applications in machine learning, 2016-2019, Principal Investigator
3. NSFC Excellent Young Scientists Fund, 11222103, Applied and computational harmonic analysis, 2013-2015, Principal Investigator
4. NSFC Young Scientists Fund, 11101438, Sparse approximation and Banach space methods in machine learning, 2012-2014, Principal Investigator
5. Guangdong Natural Science Foundation General Project, 2018A030313841, Theory and applications of kernel methods in machine learning, 2018-2021, Principal Investigator
Journal of Approximation Theory, 2019, 245: 73-82
September 1, 2019
Fast reconstruction of a bandlimited function from its finite oversampled data is a fundamental problem in sampling theory. As the number of samples increases to infinity, exponentially decaying reconstruction errors can be achieved by many methods in the literature. In fact, it is generally conjectured that when the optimal method is used, the dominant term in the error of reconstructing a bandlimited function from its data sampled at the integer points decays exponentially. To date, the best exponential constant among regularization methods is achieved by the highly efficient Gaussian regularized Whittaker-Kotelnikov-Shannon sampling series. We prove in this paper that this exponential constant is optimal for the method. Moreover, the optimal variance of the Gaussian regularizer is provided.
Keywords: bandlimited functions; the Paley-Wiener space; sampling theorems; Gaussian regularization; error bounds
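The Gaussian regularized Whittaker-Kotelnikov-Shannon series discussed above admits a short numerical sketch. The following is only an illustration, not the paper's implementation: the test function, sample range, and the heuristic choice sigma = sqrt(n) are all invented for the demo.

```python
import numpy as np

def gauss_shannon(samples, n, t, sigma):
    """Evaluate the Gaussian regularized WKS sampling series
        f(t) ~ sum_{|k| <= n} f(k) sinc(t - k) exp(-(t - k)^2 / (2 sigma^2))
    from samples f(-n), ..., f(n) taken at the integer points."""
    k = np.arange(-n, n + 1)
    # np.sinc(x) = sin(pi x) / (pi x), matching the WKS cardinal series
    weights = np.sinc(t - k) * np.exp(-(t - k) ** 2 / (2.0 * sigma ** 2))
    return np.dot(samples, weights)

# An oversampled target: f is bandlimited to [-pi/2, pi/2]
f = lambda t: np.sinc((t - 0.3) / 2.0)
n = 20
samples = f(np.arange(-n, n + 1).astype(float))
t0 = 0.45
approx = gauss_shannon(samples, n, t0, sigma=np.sqrt(n))  # illustrative sigma
print(abs(approx - f(t0)))  # small reconstruction error
```

Even with this crude choice of variance, the truncated Gaussian regularized series recovers the oversampled function far more accurately than the plain truncated cardinal series would.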
[Journal Article] Statistical margin error bounds for L1-norm support vector machines
Neurocomputing, 2019, 339: 210-216
April 28, 2019
Compared with Lp-norm Support Vector Machines (SVMs), the L1-norm SVM enjoys the nice property of simultaneously performing classification and feature selection. Margin error bounds for the SVM on Hilbert spaces (or on more general q-uniformly smooth Banach spaces) have been obtained in the literature to justify the strategy of maximizing the margin in SVMs. In this paper, we estimate the margin error bound for L1-norm SVM methods and give a geometrical interpretation of the result. We show that the fat-shattering dimensions of the Banach spaces ℓ1 and ℓ∞ are both infinite. We therefore establish margin error bounds for the SVM on finite-dimensional spaces with the L1-norm, thus supplying statistical justification for the large-margin classification of the L1-norm SVM on finite-dimensional spaces. To complete the theory, corresponding results for the L∞-norm SVM are also presented.
Keywords: margin error bounds; L1-norm support vector machines; geometrical interpretation; the fat-shattering dimension; the classification hyperplane
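The simultaneous classification and feature selection described above can be seen in a toy sketch. The solver below minimizes an L1-regularized hinge loss by subgradient descent; the synthetic data, step size, and regularization weight are invented for the demonstration, and this is not the estimator analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: only the first 2 of 10 features carry the label
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

def l1_svm(X, y, lam=0.05, lr=0.01, epochs=2000):
    """Subgradient descent on the (bias-free, illustrative) objective
       mean(hinge(1 - y * Xw)) + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1  # points inside the margin contribute to the hinge
        grad = -(X[active] * y[active, None]).sum(axis=0) / n + lam * np.sign(w)
        w -= lr * grad
    return w

w = l1_svm(X, y)
print(np.round(w, 2))  # weights on the uninformative features shrink toward zero
```

The L1 penalty drives the weights of the eight uninformative features toward zero while the two informative weights stay large, which is the feature-selection behavior the abstract refers to.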
[Journal Article] Margin Error Bounds for Support Vector Machines on Reproducing Kernel Banach Spaces
Neural Computation, 2017, 29(11): 3078-3093
November 1, 2017
Support vector machines (SVMs), which maximize the margin from patterns to the separation hyperplane subject to correct classification, have achieved remarkable success in machine learning. Margin error bounds based on Hilbert spaces have been introduced in the literature to justify the strategy of maximizing the margin in SVMs. Recently, there has been much interest in developing Banach space methods for machine learning, with large-margin classification in Banach spaces a focus of such attempts. In this letter, we establish a margin error bound for the SVM on reproducing kernel Banach spaces, thus supplying statistical justification for large-margin classification in Banach spaces.
[Journal Article] Optimal sampling points in reproducing kernel Hilbert spaces
Journal of Complexity, 2016, 34: 129-151
June 1, 2016
The recent development of compressed sensing seeks to extract information from as few samples as possible. In such applications, since the number of samples is restricted, one should deploy the sampling points wisely. We are thus motivated to study the optimal distribution of finite sampling points in reproducing kernel Hilbert spaces, the natural background function spaces for sampling. Formulation under the framework of optimal reconstruction yields a minimization problem. In the discrete measure case, we estimate the distance between the optimal subspace resulting from a general Karhunen-Loève transform and the kernel space, obtaining another algorithm that is computationally favorable. Numerical experiments are then presented to illustrate the effectiveness of the algorithms for the search for optimal sampling points.
Keywords: sampling points; optimal distribution; reproducing kernels; the Karhunen-Loève transform
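One computationally cheap way to place sampling points in a reproducing kernel Hilbert space, loosely related to (but not identical with) the optimization problem above, is a greedy rule that repeatedly adds the candidate with the largest power-function value, i.e. the point currently worst approximated by the sampling set. The Gaussian kernel and all parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=10.0):
    """Gram matrix of the Gaussian kernel exp(-gamma (x - y)^2) on 1-D points."""
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

def greedy_points(candidates, m, gamma=10.0):
    """P-greedy selection of m sampling points from a candidate grid."""
    chosen = []
    for _ in range(m):
        diag = np.ones(len(candidates))  # K(x, x) = 1 for the Gaussian kernel
        if chosen:
            S = np.array(chosen)
            Kxs = gaussian_kernel(candidates, S, gamma)
            Kss = gaussian_kernel(S, S, gamma)
            # Power function: K(x, x) - k_S(x)^T Kss^{-1} k_S(x)
            A = Kxs @ np.linalg.inv(Kss)
            power = diag - np.einsum('ij,ij->i', A, Kxs)
        else:
            power = diag
        chosen.append(candidates[int(np.argmax(power))])
    return np.array(chosen)

pts = greedy_points(np.linspace(0.0, 1.0, 201), m=5)
print(np.sort(pts))  # points spread out over [0, 1]
```

Because the power function vanishes at already-chosen points, the greedy rule automatically spreads the sampling points over the domain, mimicking the qualitative behavior of optimally distributed points.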
[Journal Article] Existence of the Bedrosian identity for Fourier multiplier operators
Forum Mathematicum, 2016, 28(4): 749-759
The Hilbert transform H satisfies the Bedrosian identity H(fg) = fH(g) whenever the supports of the Fourier transforms of f, g ∈ L²(ℝ) are respectively contained in A = [-a, b] and B = ℝ∖(-b, a), where 0 ≤ a, b ≤ +∞. Attracted by this interesting result arising from time-frequency analysis, we investigate the existence of such an identity for a general bounded Fourier multiplier operator on L²(ℝᵈ) and for general support sets A and B. A geometric characterization of the support sets for the existence of the Bedrosian identity is established. Moreover, the support sets for the partial Hilbert transforms are all found. In particular, for the Hilbert transform to satisfy the Bedrosian identity, the support sets must be given as above.
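The classical identity stated above can be checked numerically. The sketch below works in an assumed discrete periodic setting (not the L²(ℝ) framework of the paper), applying the Hilbert multiplier -i·sign(ω) on the DFT: a low-frequency factor f and a high-frequency factor g with separated spectral supports satisfy H(fg) = f·H(g) up to rounding error.

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform via the Fourier multiplier -i * sign(omega)."""
    freqs = np.fft.fftfreq(len(x))
    multiplier = -1j * np.sign(freqs)
    return np.real(np.fft.ifft(multiplier * np.fft.fft(x)))

n = 1024
t = np.arange(n) / n
f = np.cos(2 * np.pi * 2 * t)    # low-pass factor: spectrum at frequencies +-2
g = np.cos(2 * np.pi * 50 * t)   # high-pass factor: spectrum at frequencies +-50
lhs = hilbert_transform(f * g)
rhs = f * hilbert_transform(g)
print(np.max(np.abs(lhs - rhs)))  # near machine precision: the identity holds
```

Here fg has spectral content only at frequencies 48 and 52, on which the multiplier acts exactly as it does on g alone, which is why the two sides agree.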
[Journal Article] Reproducing kernel Banach spaces with the ℓ1 norm
Applied and Computational Harmonic Analysis, 2013, 34(1): 96-116
January 1, 2013
Targeting sparse learning, we construct Banach spaces of functions on an input space X with the following properties: (1) the space possesses an ℓ1 norm, in the sense that it is isometrically isomorphic to the Banach space of functions on X that are integrable with respect to the counting measure; (2) point evaluations are continuous linear functionals on the space and are representable through a bilinear form with a kernel function; and (3) regularized learning schemes on the space satisfy the linear representer theorem. Examples of kernel functions admissible for the construction of such spaces are given.
Keywords: reproducing kernel Banach spaces; sparse learning; Lasso; basis pursuit; regularization; the representer theorem; the Brownian bridge kernel; the exponential kernel
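The sparsity promoted by ℓ1-norm regularization over kernel translates can be illustrated with a small sketch. The ISTA solver and all parameter choices below are invented for the demo (they are not the paper's construction); the Brownian bridge kernel is one of the kernels named in the keywords.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def kernel_lasso(K, y, lam=0.01, iters=500):
    """ISTA for the l1-regularized kernel scheme
       min_c 0.5 * ||K c - y||^2 + lam * ||c||_1,
    whose minimizer is supported on few kernel translates."""
    L = np.linalg.norm(K, 2) ** 2  # Lipschitz constant of the smooth part
    c = np.zeros(len(y))
    for _ in range(iters):
        grad = K.T @ (K @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c

# Brownian bridge kernel K(s, t) = min(s, t) - s t on (0, 1)
x = np.linspace(0.1, 0.9, 30)
K = np.minimum.outer(x, x) - np.outer(x, x)
y = np.sin(np.pi * x)
c = kernel_lasso(K, y)
print(np.count_nonzero(c), "of", len(c), "coefficients nonzero")
```

The soft-thresholding step sets coefficients exactly to zero, so the fitted function is a combination of only a few kernel translates, the behavior that motivates ℓ1-norm reproducing kernel Banach spaces.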
[Journal Article] Vector-valued reproducing kernel Banach spaces with applications to multi-task learning
Journal of Complexity, 2013, 29(2): 195-215
April 1, 2013
Motivated by multi-task machine learning with Banach spaces, we propose the notion of vector-valued reproducing kernel Banach spaces (RKBSs). Basic properties of the spaces and the associated reproducing kernels are investigated. We also present feature-map constructions and several concrete examples of vector-valued RKBSs. The theory is then applied to multi-task machine learning. In particular, the representer theorem and characterization equations for the minimizer of regularized learning schemes in vector-valued RKBSs are established.
Keywords: vector-valued reproducing kernel Banach spaces; feature maps; regularized learning; the representer theorem; characterization equations
[Journal Article] Refinement of Operator-valued Reproducing Kernels
Journal of Machine Learning Research, 2012, 13: 91-136
This paper studies the construction of a refinement kernel for a given operator-valued reproducing kernel such that the vector-valued reproducing kernel Hilbert space of the refinement kernel contains that of the given kernel as a subspace. The study is motivated by the need to update the current operator-valued reproducing kernel in multi-task learning when underfitting or overfitting occurs. Numerical simulations confirm that the established refinement kernel method is able to meet this need. Various characterizations are provided based on feature maps and vector-valued integral representations of operator-valued reproducing kernels. Concrete examples of refining translation-invariant and finite Hilbert-Schmidt operator-valued reproducing kernels are provided. Other examples include the refinement of Hessians of scalar-valued translation-invariant kernels and of transformation kernels. The existence and properties of operator-valued reproducing kernels preserved during the refinement process are also investigated.
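The underfitting scenario mentioned above can be mimicked with scalar kernels: adding a positive-definite finer-scale kernel to a smooth one yields a refined kernel whose reproducing kernel Hilbert space contains the original one. The kernels, data, and regularization below are toy assumptions, far simpler than the operator-valued setting of the paper.

```python
import numpy as np

def krr(K_train, y, K_test, reg=1e-4):
    """Kernel ridge regression fit/predict from Gram matrices."""
    alpha = np.linalg.solve(K_train + reg * np.eye(len(y)), y)
    return K_test @ alpha

def gauss(a, b, gamma):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

x = np.linspace(0.0, 1.0, 40)
# Target with a fine-scale component the smooth kernel cannot capture
y = np.sin(2 * np.pi * x) + 0.2 * np.sin(12 * np.pi * x)

K_coarse = gauss(x, x, 2.0)                      # smooth kernel: underfits
K_refined = K_coarse + 0.5 * gauss(x, x, 200.0)  # refinement: add a finer-scale kernel
for K in (K_coarse, K_refined):
    pred = krr(K, y, K)
    print(np.linalg.norm(pred - y))  # the refined kernel reduces the residual
```

Since the sum of two reproducing kernels reproduces a space containing both summands' spaces, the refined kernel strictly enlarges the hypothesis space and repairs the underfitting, which is the update mechanism the abstract describes in the operator-valued case.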
[Journal Article] Frames, Riesz bases, and sampling expansions in Banach spaces via semi-inner products
Applied and Computational Harmonic Analysis, 2011, 31(1): 1-25
July 1, 2011
Frames in a Banach space have been defined in some recent references as a sequence in its dual space. We propose instead to define them as a collection of elements in the Banach space itself by making use of semi-inner products. The classical theory of frames and Riesz bases is generalized under this new perspective. We then aim at establishing the Shannon sampling theorem in Banach spaces. The existence of such expansions in translation-invariant reproducing kernel Hilbert and Banach spaces is discussed.
Keywords: frames; Riesz bases; Bessel sequences; Riesz-Fischer sequences; Banach spaces; semi-inner products; duality mappings; Shannon's sampling expansions; reproducing kernel Banach spaces; reproducing kernel Hilbert spaces; Gaussian kernels
[Journal Article] Reproducing Kernel Banach Spaces for Machine Learning
The Journal of Machine Learning Research, 2009, 10: 2741-2775
We introduce the notion of reproducing kernel Banach spaces (RKBSs) and study special semi-inner-product RKBSs by making use of semi-inner products and the duality mapping. Properties of an RKBS and its reproducing kernel are investigated. As applications, we develop in the framework of RKBSs standard learning schemes, including minimal norm interpolation, regularization networks, support vector machines, and kernel principal component analysis. In particular, existence, uniqueness, and representer theorems are established.