
9 results found for this scholar

Upload date: May 31, 2009

[Journal article] L1/2 Regularizer

Zongben Xu, Hai Zhang, Yao Wang & Xiangyu Chang

Sci China Ser F-Inf Sci, Jan. 2009, Vol. 52, No. 1, pp. 1-9

Abstract

In this paper we propose an L1/2 regularizer, which has a nonconvex penalty. The L1/2 regularizer is shown to have many promising properties such as unbiasedness, sparsity and oracle properties. A reweighted iterative algorithm is proposed so that the L1/2-regularized problem can be solved by transforming it into a series of L1-regularized problems. The solution of the L1/2 regularizer is more sparse than that of the L1 regularizer, while solving the L1/2 regularizer is much simpler than solving the L0 regularizer. The experiments show that the L1/2 regularizer is very useful and efficient, and can be taken as a representative of the Lp (0<p<1) regularizers.

machine learning, variable selection, regularizer, compressed sensing
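The reweighted iteration described in the abstract can be sketched as follows. This is a schematic reconstruction, not the paper's algorithm verbatim: the least-squares data term, the proximal-gradient inner solver, and the weight update w_i = 1/(2*sqrt(|x_i| + eps)) with a small smoothing term eps are our assumptions.

```python
import numpy as np

def weighted_lasso(A, b, lam, w, n_iter=500):
    """Solve min 0.5*||Ax - b||^2 + lam * sum_i w_i*|x_i| by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the least-squares term
        z = x - grad / L                   # gradient step
        thr = lam * w / L                  # per-coordinate soft-threshold level
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return x

def l_half_reweighted(A, b, lam, n_outer=10, eps=1e-6):
    """Approximate L1/2 regularization by a sequence of weighted L1 problems:
    the weight 1/(2*sqrt(|x_i| + eps)) linearizes |x_i|^(1/2) at the previous iterate."""
    w = np.ones(A.shape[1])                # first pass is a plain L1 problem
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = weighted_lasso(A, b, lam, w)
        w = 1.0 / (2.0 * np.sqrt(np.abs(x) + eps))
    return x
```

Each outer pass solves a weighted L1 problem; coordinates driven to zero receive large weights on the next pass, which is what pushes the solution to be sparser than a single L1 fit.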

Upload date: May 31, 2009

[Journal article] The generalization performance of ERM algorithm with strongly mixing observations

Bin Zou, Luoqing Li, Zongben Xu

Published online: 07 February 2009

Abstract

Generalization performance is the main concern of theoretical research in machine learning. The previous main bounds describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are based on independent and identically distributed (i.i.d.) samples. In order to study the generalization performance of the ERM algorithm with dependent observations, we first establish an exponential bound on the rate of relative uniform convergence of the ERM algorithm with exponentially strongly mixing observations, and then obtain generalization bounds and prove that the ERM algorithm with exponentially strongly mixing observations is consistent. The main results obtained in this paper not only extend the previously known results for i.i.d. observations to the case of exponentially strongly mixing observations, but also improve the previous results for strongly mixing samples. Because the ERM algorithm is usually very time-consuming and overfitting may occur when the complexity of the hypothesis space is high, as an application of our main results we also explore a new strategy for implementing the ERM algorithm in high-complexity hypothesis spaces.

Generalization performance, ERM principle, Relative uniform convergence, Exponentially strongly mixing
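As a minimal illustration of the ERM principle discussed above (not of the paper's mixing-sequence analysis), one can select, from a finite hypothesis class, the hypothesis with the smallest empirical risk on the observed sample. The 0-1 loss and the threshold classifiers in the usage below are our choices for the example.

```python
import numpy as np

def empirical_risk(h, X, y):
    """Average 0-1 loss of hypothesis h on the sample (X, y)."""
    return float(np.mean(h(X) != y))

def erm(hypotheses, X, y):
    """Empirical Risk Minimization: return the hypothesis (and its risk)
    that minimizes the empirical risk over a finite hypothesis class."""
    risks = [empirical_risk(h, X, y) for h in hypotheses]
    best = int(np.argmin(risks))
    return hypotheses[best], risks[best]
```

Usage: with one-dimensional data and threshold classifiers h_t(x) = 1[x > t], ERM simply picks the threshold with the fewest sample errors.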

Upload date: May 31, 2009

[Journal article] The essential order of approximation for nearly exponential type neural networks

XU Zongben & WANG Jianjun

Science in China Series F: Information Sciences, 2006, Vol. 49, No. 4, pp. 446-460

Abstract

For the nearly exponential type of feedforward neural networks (neFNNs), the essential order of their approximation is revealed. It is proven that for any continuous function defined on a compact set of R^d, there exists a three-layer neFNN with a fixed number of hidden neurons that attains the essential order. When the function to be approximated belongs to the α-Lipschitz family (0<α≤2), the essential order of approximation is shown to be O(n^(-α)), where n is any integer not less than the reciprocal of the predetermined approximation error. Upper-bound and lower-bound estimates on the approximation precision of the neFNNs are provided. The obtained results not only characterize the intrinsic approximation property of the neFNNs, but also uncover the implicit relationship between the precision (speed) and the number of hidden neurons of the neFNNs.

nearly exponential type neural networks, the essential order of approximation, the modulus of smoothness of a multivariate function

Upload date: May 31, 2009

[Journal article] A comparative study of two modeling approaches in neural networks

Zong-Ben Xu, Hong Qiao, Jigen Peng, Bo Zhang

Neural Networks, 17 (2004), pp. 73-85

Abstract

The neuron state modeling approach and the local field modeling approach provide two fundamental ways of modeling in neural network research; accordingly, a neural network system can be formulated either as a static neural network model or as a local field neural network model. These two models are theoretically compared in terms of their trajectory transformation property, equilibrium correspondence property, nontrivial attractive manifold property, global convergence, as well as stability in many different senses. The comparison reveals an important stability invariance property of the two models, in the sense that the stability (in any sense) of the static model is equivalent to that of a subsystem deduced from the local field model when restricted to a specific manifold. This stability invariance property lays a sound theoretical foundation for the validity of a useful, cross-fertilization-type stability analysis methodology for various neural network models.

Static neural network modeling, Local field neural network modeling, Recurrent neural networks, Stability analysis, Asymptotic stability, Exponential stability, Global convergence, Globally attractive
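The two modeling approaches compared in the abstract are commonly written as the dynamical systems below. This is a hedged sketch using standard forms: the activation f = tanh, the forward-Euler integration, and the parameter names W and theta are our choices, not taken from the paper. Under these forms, the equilibrium correspondence u* = W x* + theta (with x* = f(u*)) can be checked numerically.

```python
import numpy as np

def simulate_static(W, theta, x0, dt=0.01, steps=2000):
    """Static model: dx/dt = -x + f(W x + theta), f = tanh,
    integrated with forward Euler until (approximate) equilibrium."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + np.tanh(W @ x + theta))
    return x

def simulate_local_field(W, theta, u0, dt=0.01, steps=2000):
    """Local field model: du/dt = -u + W f(u) + theta, f = tanh."""
    u = u0.copy()
    for _ in range(steps):
        u += dt * (-u + W @ np.tanh(u) + theta)
    return u
```

If x* = tanh(W x* + theta) and one sets u* = W x* + theta, then W tanh(u*) + theta = u*, so each static equilibrium maps to a local field equilibrium; for a contractive W (spectral norm below 1) both trajectories settle onto this corresponding pair.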

Upload date: May 31, 2009

[Journal article] A New Model of Simulated Evolutionary Computation - Convergence Analysis and Specifications

Kwong-Sak Leung, Qi-Hong Duan, Zong-Ben Xu, and C. K. Wong

IEEE Transactions on Evolutionary Computation, Vol. 5, No. 1, February 2001

Abstract

There have been various algorithms designed for simulating natural evolution. This paper proposes a new simulated evolutionary computation model called the abstract evolutionary algorithm (AEA), which unifies most of the currently known evolutionary algorithms and describes evolution as an abstract stochastic process composed of two fundamental operators: a selection operator and an evolution operator. By axiomatically characterizing the properties of these fundamental operators, several general convergence theorems and convergence rate estimations for the AEA are established. The established theorems are applied to a series of known evolutionary algorithms, directly yielding new convergence conditions and convergence rate estimations for various specific genetic algorithms and evolution strategies. The present work provides a significant step toward the establishment of a unified theory of simulated evolutionary computation.

Aggregating and scattering rate, evolutionary strategy, genetic algorithm, selection intensity, selection pressure, stochastic process
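The AEA's two-operator decomposition can be sketched as an iteration skeleton. The skeleton below is a schematic reading of the abstract; the binary-tournament selection and Gaussian-mutation evolution operators in the usage are illustrative choices of ours, not the paper's specifications.

```python
import random

def abstract_evolutionary_algorithm(init_pop, select, evolve, fitness, generations=200):
    """AEA skeleton per the abstract: evolution modeled as repeated
    application of a selection operator followed by an evolution operator."""
    pop = list(init_pop)
    for _ in range(generations):
        pop = select(pop, fitness)   # selection operator
        pop = evolve(pop)            # evolution (variation) operator
    return max(pop, key=fitness)     # best individual found

def binary_tournament(pop, fitness):
    """One concrete selection operator: binary tournament with replacement."""
    return [max(random.sample(pop, 2), key=fitness) for _ in pop]

def gaussian_mutation(pop, sigma=0.1):
    """One concrete evolution operator: additive Gaussian mutation."""
    return [x + random.gauss(0.0, sigma) for x in pop]
```

Concrete genetic algorithms and evolution strategies are obtained by plugging specific operators into this skeleton, which is what lets the paper's axiomatic conditions on the two operators yield convergence results for many algorithms at once.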

Collaborating scholars

  • 徐宗本 (Xi'an Jiaotong University, Shaanxi)