
9 results found for this scholar.

Upload date: May 31, 2009

[Journal Article] A Decomposition Principle for Complexity Reduction of Artificial Neural Networks

Zong-Ben Xu (徐宗本) and Chung-Ping Kwong

Neural Networks, Vol. 9, No. 6, pp. 999-1016, 1996

Abstract

A decomposition principle is developed for systematic determination of the dimensionality and the connections of Hopfield-type associative memory networks. Given a set of high-dimensional prototype vectors of given memory objects, we develop decomposition algorithms to extract a set of lower-dimensional key features of the pattern vectors. Every key feature can be used to build an associative memory with the lowest complexity, and more than one key feature can be used simultaneously to build networks with higher recognition accuracy. In the latter case, we further propose a "decomposed neural network" based on a new encoding scheme to reduce the network complexity. In contrast to the original Hopfield network, the decomposed networks not only increase the storage capacity but also reduce the connection complexity from quadratic to linear growth with the network dimension. Both theoretical analysis and simulation results demonstrate that the proposed principle is powerful.

Decomposition principle, Hopfield-type networks, Interpolation operator, Best approximation projection, Associative memories, Elementary matrix transformation
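
To make the complexity claim concrete, the sketch below implements a plain Hopfield-type associative memory with the standard Hebbian (outer-product) rule; its weight matrix carries the quadratic number of connections that the paper's decomposition reduces to linear. This is a minimal baseline for illustration, not the authors' decomposed network; all names and parameters are ours.

    import numpy as np

    def hopfield_train(patterns):
        # Hebbian outer-product rule: W is n x n, i.e. the O(n^2)
        # connection complexity that the decomposition principle reduces.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def hopfield_recall(W, probe, iters=20):
        # Synchronous sign updates until the state (usually) settles.
        s = probe.copy()
        for _ in range(iters):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    # Usage: store two orthogonal +/-1 patterns, recall from a corrupted probe.
    P = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                  [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
    W = hopfield_train(P)
    probe = P[0].copy()
    probe[0] *= -1                      # flip one bit
    assert np.array_equal(hopfield_recall(W, probe), P[0])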

Upload date: May 31, 2009

[Journal Article] Neural Networks for Convex Hull Computation

Yee Leung, Jiang-She Zhang, and Zong-Ben Xu (徐宗本)

IEEE Transactions on Neural Networks, Vol. 8, No. 3, May 1997

Abstract

Computing the convex hull is one of the central problems in various applications of computational geometry. In this paper, a convex hull computing neural network (CHCNN) is developed to solve the related problems in N-dimensional spaces. The algorithm is based on a two-layered neural network, topologically similar to ART, with a newly developed adaptive training strategy called excited learning. The CHCNN provides parallel, on-line, and real-time processing of data which, after training, yields two closely related approximations, one from within and one from outside, of the desired convex hull. It is shown that the accuracy of the approximate convex hulls obtained is of order O(K^(-1/(N-1))), where K is the number of neurons in the output layer of the CHCNN. When K is taken to be sufficiently large, the CHCNN can generate an arbitrarily accurate approximate convex hull. We also show that an upper bound exists such that the CHCNN will yield the precise convex hull when K is larger than or equal to this bound. A series of simulations and applications is provided to demonstrate the feasibility, effectiveness, and high efficiency of the proposed algorithm.

ART-like neural network, computational geometry, convex hull computation, excited learning
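
The inner/outer pair of approximations the abstract describes can be illustrated without the CHCNN itself: pick K support directions (playing the role of the K output neurons), take the farthest data point along each direction for an inner approximation, and intersect the K supporting half-spaces for an outer one. The 2D sketch below is our own illustration under that reading, not the paper's algorithm.

    import numpy as np

    def support_approximations(points, K=16):
        # K unit directions stand in for the K output neurons.
        angles = np.linspace(0, 2 * np.pi, K, endpoint=False)
        dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
        proj = points @ dirs.T                   # projection onto each direction
        inner = points[np.unique(np.argmax(proj, axis=0))]  # inner-hull vertices
        h = proj.max(axis=0)                     # support values: the outer hull
        return inner, dirs, h                    # is {x : dirs @ x <= h}

    # Usage: the two hulls sandwich the data, and the gap shrinks as K grows,
    # matching the O(K^(-1/(N-1))) rate quoted above (here N = 2).
    pts = np.random.default_rng(0).standard_normal((200, 2))
    inner, dirs, h = support_approximations(pts, K=32)
    assert np.all(pts @ dirs.T <= h + 1e-9)      # every point lies in the outer hull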

Upload date: May 31, 2009

[Journal Article] L1/2 Regularizer

Zongben Xu (徐宗本), Hai Zhang, Yao Wang, and Xiangyu Chang

Sci China Ser F-Inf Sci, Jan. 2009, Vol. 52, No. 1, pp. 1-9

Abstract

In this paper we propose an L1/2 regularizer, which has a nonconvex penalty. The L1/2 regularizer is shown to have many promising properties such as unbiasedness, sparsity, and oracle properties. A reweighted iterative algorithm is proposed so that the L1/2 regularizer can be solved by transforming it into a series of L1 regularizers. The solution of the L1/2 regularizer is more sparse than that of the L1 regularizer, while solving the L1/2 regularizer is much simpler than solving the L0 regularizer. The experiments show that the L1/2 regularizer is very useful and efficient, and can be taken as a representative of the Lp (0 < p < 1) regularizers.

machine learning, variable selection, regularizer, compressed sensing
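
The reweighted scheme the abstract mentions is straightforward to sketch: each outer pass solves a weighted L1 problem (here by plain ISTA), then the weights are refreshed as |x_i|^(-1/2), which majorizes the L1/2 penalty at the current iterate. The code below is a schematic reading of that idea, not the paper's exact algorithm; lam, eps, and the iteration counts are illustrative choices of ours.

    import numpy as np

    def soft(z, t):
        # Elementwise soft-thresholding, the proximal map of the weighted L1 norm.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def l_half_reweighted(A, y, lam=0.05, outer=10, inner=200, eps=1e-2):
        n = A.shape[1]
        x = np.zeros(n)
        w = np.ones(n)                            # first pass: plain (unweighted) L1
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # ISTA step size 1/L
        for _ in range(outer):
            for _ in range(inner):
                x = soft(x - step * A.T @ (A @ x - y), step * lam * w)
            w = 1.0 / (np.sqrt(np.abs(x)) + eps)  # reweight toward the L1/2 penalty
        return x

    # Usage: a standard compressed-sensing setup; x_hat should concentrate
    # on the five true support coordinates.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[:5] = [3.0, -2.0, 4.0, 1.5, -3.0]
    x_hat = l_half_reweighted(A, A @ x_true)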

Upload date: May 31, 2009

[Journal Article] The generalization performance of ERM algorithm with strongly mixing observations

Bin Zou, Luoqing Li, and Zongben Xu (徐宗本)

Published online: 07 February 2009

Abstract

Generalization performance is the main concern of theoretical research in machine learning. The main previous bounds describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are based on independent and identically distributed (i.i.d.) samples. In order to study the generalization performance of the ERM algorithm with dependent observations, we first establish an exponential bound on the rate of relative uniform convergence of the ERM algorithm with exponentially strongly mixing observations, and then we obtain the generalization bounds and prove that the ERM algorithm with exponentially strongly mixing observations is consistent. The main results obtained in this paper not only extend the previously known results for i.i.d. observations to the case of exponentially strongly mixing observations, but also improve the previous results for strongly mixing samples. Because the ERM algorithm is usually very time-consuming and overfitting may happen when the complexity of the hypothesis space is high, as an application of our main results we also explore a new strategy to implement the ERM algorithm in high-complexity hypothesis spaces.

Generalization performance, ERM principle, Relative uniform convergence, Exponentially strongly mixing
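
For reference, the two objects the abstract combines can be written out explicitly. The formulation below is the standard textbook one, stated in our own notation; it is not the paper's specific bound.

    % Empirical risk minimization over a hypothesis space H
    f_n = \arg\min_{f \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr)

    % Exponentially strongly (alpha-)mixing observations: dependence between
    % the past up to time t and the future from time t+k decays exponentially in k
    \alpha(k) = \sup_{t} \sup_{A \in \sigma_1^{t},\, B \in \sigma_{t+k}^{\infty}}
        \left| P(A \cap B) - P(A)\,P(B) \right|
        \;\le\; \bar{\alpha}\, e^{-b k^{\gamma}}, \qquad b, \gamma > 0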

Collaborating Scholars

  • 徐宗本 (Zong-Ben Xu)

    西安交通大学 (Xi'an Jiaotong University), Shaanxi

    Homepage not yet available