徐宗本 (Zongben Xu)
Long engaged in research on the basic theory of intelligent information processing.

Academic titles: Chief Scientist of the national "973" and "863" programs; doctoral supervisor
Discipline: Applied Mathematics
Research interests: basic theory of intelligent information processing
Zongben Xu (徐宗本), born in 1955, Han ethnicity, is a professor and doctoral supervisor. He received his PhD from the Department of Mathematics of Xi'an Jiaotong University in 1987 and was a postdoctoral fellow at the University of Strathclyde in the UK. He is currently Vice President of Xi'an Jiaotong University and Director of its Institute for Information and System Sciences; a member of the Mathematics Appraisal Group of the Academic Degrees Committee of the State Council and of the Computer Science Review Group of its Information Science Division; Vice Chair of the Ministry of Education's Steering Committee for Mathematics and Statistics Education; and Chair of its Subcommittee for Mathematics Programs. He formerly served as Dean of the School of Science of Xi'an Jiaotong University, and is Chief Scientist of the National Basic Research Program of China (973 Program) project "Basic Theory and Key Technologies of Unstructured Information Processing Based on Visual Cognition".
Professor Xu has long been engaged in research on the basic theory of intelligent information processing, making systematic, innovative contributions to the core foundations of data modeling, particularly in machine learning. He raised data modeling from the level of "auxiliary plotting" to that of "cognitive simulation", systematically proposing new principles and methods of data modeling based on visual cognition. He discovered a "binomial-like formula" as a new data-modeling tool in non-Euclidean frameworks, which laid the analytical foundation for regularization methods in machine learning and is known abroad as the "Xu-Roach formula". He solved several important foundational problems in neural network systems and simulated evolutionary computation (SEC); related papers were cited by Nature Review as representative SEC references. He was the first to use the concept of "degree of population diversity" to characterize the cause and features of false convergence in SEC, a quantity referred to in the literature as the "Leung-Gao-Xu metric". He has published 172 academic papers, 114 of them SCI-indexed, with 782 SCI citations (642 by others), an average of 6.86 citations per paper. His work "Basic Theory of Data Modeling Based on Cognition and Non-Euclidean Frameworks" won the Second Prize of the National Natural Science Award (2007), and in 2008 he received the CSIAM Su Buchin Prize, the highest award in applied mathematics in China.

Zongben Xu, Hai Zhang, Yao Wang, Xiangyu Chang
Sci China Ser F-Inf Sci, Jan. 2009, Vol. 52, No. 1, pp. 1-9
In this paper we propose an L1/2 regularizer, which has a nonconvex penalty. The L1/2 regularizer is shown to have many promising properties such as unbiasedness, sparsity and oracle properties. A reweighted iterative algorithm is proposed so that the solution of the L1/2 regularizer can be obtained by transforming it into the solutions of a series of L1 regularizers. The solution of the L1/2 regularizer is more sparse than that of the L1 regularizer, while solving the L1/2 regularizer is much simpler than solving the L0 regularizer. The experiments show that the L1/2 regularizer is very useful and efficient, and can be taken as a representative of the Lp (0<p<1) regularizers.
machine learning, variable selection, regularizer, compressed sensing
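The reweighted iterative algorithm mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's exact scheme: the function names and the parameters `lam`, `eps`, and the iteration counts are our assumptions. Each outer step solves a weighted L1 problem by proximal gradient descent (ISTA), then updates the weights from the current iterate so that the weighted L1 penalty tracks the L1/2 penalty.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the (possibly weighted) L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_l1_ista(A, y, lam, w, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam * sum_i w_i * |x_i|."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam * w / L)
    return x

def l_half_reweighted(A, y, lam, n_outer=5, eps=1e-3):
    """Approach the L1/2-regularized solution via a sequence of weighted L1
    problems, reweighting with w_i = 1 / (|x_i|^(1/2) + eps) after each solve."""
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = weighted_l1_ista(A, y, lam, w)
        w = 1.0 / (np.sqrt(np.abs(x)) + eps)
    return x
```

On a small noiseless compressed-sensing instance (for example, 40 Gaussian measurements of an 80-dimensional 4-sparse signal), the reweighted iterate is markedly sparser than a plain L1 solution, illustrating the sparsity claim in the abstract.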

[Journal Article] The generalization performance of ERM algorithm with strongly mixing observations
Bin Zou, Luoqing Li, Zongben Xu
Published online: 7 February 2009
The generalization performance is the main concern of machine learning theoretical research. The previous main bounds describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are based on independent and identically distributed (i.i.d.) samples. In order to study the generalization performance of the ERM algorithm with dependent observations, we first establish the exponential bound on the rate of relative uniform convergence of the ERM algorithm with exponentially strongly mixing observations, and then we obtain the generalization bounds and prove that the ERM algorithm with exponentially strongly mixing observations is consistent. The main results obtained in this paper not only extend the previously known results for i.i.d. observations to the case of exponentially strongly mixing observations, but also improve the previous results for strongly mixing samples. Because the ERM algorithm is usually very time-consuming and overfitting may happen when the complexity of the hypothesis space is high, as an application of our main results we also explore a new strategy to implement the ERM algorithm in a high-complexity hypothesis space.
Generalization performance, ERM principle, relative uniform convergence, exponentially strongly mixing
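The ERM principle itself is easy to state concretely. Below is a toy sketch (ours, with assumed names and parameters, not the paper's construction) of ERM over a finite class of threshold classifiers, trained on dependent samples generated by an exponentially mixing AR(1)-type chain:

```python
import numpy as np

def erm_threshold(x, y, grid):
    """Empirical Risk Minimization over a finite class of threshold
    classifiers h_t(x) = 1{x > t}: return the minimizer of empirical error."""
    errs = [np.mean((x > t).astype(int) != y) for t in grid]
    k = int(np.argmin(errs))
    return grid[k], errs[k]

def markov_sample(n, rho=0.9, seed=0):
    """Dependent covariates from an AR(1)-style chain in [0, 1]; such chains
    are a standard example of exponentially strongly mixing processes."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.5
    for t in range(1, n):
        x[t] = rho * x[t - 1] + (1 - rho) * rng.uniform()
    return x
```

With a noiseless labeling rule `y = 1{x > 0.5}`, the ERM hypothesis attains zero empirical risk and a threshold near the true one, illustrating that sample dependence does not by itself break the ERM principle; the paper's contribution is quantifying how fast such empirical risks concentrate under mixing.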

[Journal Article] The essential order of approximation for nearly exponential type neural networks
Zongben Xu and Jianjun Wang
Science in China Series F: Information Sciences, 2006, Vol. 49, No. 4, pp. 446-460
For the nearly exponential type of feedforward neural networks (neFNNs), the essential order of their approximation is revealed. It is proven that for any continuous function defined on a compact set of R^d, there exists a three-layer neFNN with a fixed number of hidden neurons that attains the essential order. When the function to be approximated belongs to the α-Lipschitz family (0<α≤2), the essential order of approximation is shown to be O(n^(-α)), where n is any integer not less than the reciprocal of the predetermined approximation error. The upper and lower bound estimations on the approximation precision of the neFNNs are provided. The obtained results not only characterize the intrinsic approximation property of the neFNNs, but also uncover the implicit relationship between the precision (speed) and the number of hidden neurons of the neFNNs.
nearly exponential type neural networks, the essential order of approximation, the modulus of smoothness of a multivariate function
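Restoring the minus sign lost from the exponent in the extraction, the quantitative statement of the abstract can be written (in our notation, not necessarily the paper's) as:

```latex
% Essential approximation order for alpha-Lipschitz targets
\[
  \operatorname{dist}\!\left(f, \mathcal{N}_n\right) = O\!\left(n^{-\alpha}\right),
  \qquad f \in \operatorname{Lip}(\alpha),\; 0 < \alpha \le 2,
\]
% where \mathcal{N}_n denotes the class of three-layer neFNNs with a fixed
% number of hidden neurons, and n is any integer not less than the reciprocal
% of the prescribed approximation error.
```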

[Journal Article] A comparative study of two modeling approaches in neural networks
Zong-Ben Xu, Hong Qiao, Jigen Peng, Bo Zhang
Neural Networks, 17 (2004), pp. 73-85
The neuron state modeling and the local field modeling provide two fundamental approaches to neural network research; based on them, a neural network system can be formulated either as a static neural network model or as a local field neural network model. These two models are theoretically compared in terms of their trajectory transformation property, equilibrium correspondence property, nontrivial attractive manifold property, and global convergence, as well as stability in many different senses. The comparison reveals an important stability invariance property of the two models, in the sense that the stability (in any sense) of the static model is equivalent to that of a subsystem deduced from the local field model when restricted to a specific manifold. Such a stability invariance property lays a sound theoretical foundation for the validity of a useful, cross-fertilization type of stability analysis methodology for various neural network models.
Static neural network modeling, local field neural network modeling, recurrent neural networks, stability analysis, asymptotic stability, exponential stability, global convergence, globally attractive
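In commonly used notation (our symbols; the abstract does not display the equations), the two modeling approaches and the trajectory transformation linking them are:

```latex
\begin{align*}
  \dot{x} &= -x + \sigma(Wx + b) &&\text{(static / neuron-state model)}\\
  \dot{u} &= -u + W\sigma(u) + b &&\text{(local field model)}
\end{align*}
% If x(t) solves the static model, then u(t) = W x(t) + b satisfies
%   du/dt = W dx/dt = -(u - b) + W sigma(u) = -u + W sigma(u) + b,
% i.e., u(t) solves the local field model; this is the trajectory
% transformation property referred to in the abstract.
```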

[Journal Article] A New Model of Simulated Evolutionary Computation: Convergence Analysis and Specifications
Kwong-Sak Leung, Qi-Hong Duan, Zong-Ben Xu, and C. K. Wong
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 5, NO. 1, FEBRUARY 2001
There have been various algorithms designed for simulating natural evolution. This paper proposes a new simulated evolutionary computation model called the abstract evolutionary algorithm (AEA), which unifies most of the currently known evolutionary algorithms and describes evolution as an abstract stochastic process composed of two fundamental operators: a selection operator and an evolution operator. By axiomatically characterizing the properties of the fundamental selection and evolution operators, several general convergence theorems and convergence rate estimations for the AEA are established. The established theorems are applied to a series of known evolutionary algorithms, directly yielding new convergence conditions and convergence rate estimations for various specific genetic algorithms and evolutionary strategies. The present work provides a significant step toward the establishment of a unified theory of simulated evolutionary computation.
Aggregating and scattering rate, evolutionary strategy, genetic algorithm, selection intensity, selection pressure, stochastic process
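A minimal concrete instance of the AEA's two abstract operators can help fix ideas. The sketch below (ours, with assumed parameters, not the paper's formal model) uses binary tournament selection as the selection operator and independent bit-flip mutation as the evolution operator, applied to the classic OneMax problem:

```python
import numpy as np

def onemax_ga(n_bits=20, pop_size=30, gens=60, seed=1):
    """A tiny genetic algorithm for OneMax (maximize the number of 1 bits).
    Returns the best fitness found after the final generation."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    fit = pop.sum(axis=1)
    for _ in range(gens):
        # Selection operator: binary tournament between random pairs.
        a = rng.integers(0, pop_size, size=pop_size)
        b = rng.integers(0, pop_size, size=pop_size)
        winners = np.where((fit[a] >= fit[b])[:, None], pop[a], pop[b])
        # Evolution operator: independent bit-flip mutation with rate 1/n.
        flips = rng.random(winners.shape) < 1.0 / n_bits
        children = np.where(flips, 1 - winners, winners)
        # Elitism: carry the current best individual into the next generation.
        children[0] = pop[fit.argmax()]
        pop, fit = children, children.sum(axis=1)
    return int(fit.max())
```

Specializing the AEA's convergence theorems to such a configuration is exactly the kind of "specification" the paper's title refers to: the axioms on the two operators translate into checkable conditions on the tournament and mutation rules.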

Hong Qiao, Jigen Peng, and Zong-Ben Xu
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 2, MARCH 2001
In this paper, a new concept called the nonlinear measure is introduced to quantify the stability of nonlinear systems, in a way similar to the matrix measure for the stability of linear systems. Based on the new concept, a novel approach to the stability analysis of neural networks is developed. With this approach, a series of new sufficient conditions for global and local exponential stability of Hopfield-type neural networks is presented, which generalizes existing results. By means of the introduced nonlinear measure, the exponential convergence rate of the neural networks to a stable equilibrium point is estimated and, for local stability, the attraction region of the stable equilibrium point is characterized. The developed approach can be generalized to the stability analysis of other general nonlinear systems.
Global exponential stability, Hopfield-type neural networks, local exponential stability, matrix measure, nonlinear measures
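For orientation, the classical matrix measure and one standard form of its nonlinear analogue (our rendering, consistent with the abstract; the paper's precise definition may differ) are:

```latex
\[
  \mu(A) = \lim_{h \to 0^{+}} \frac{\lVert I + hA \rVert - 1}{h},
  \qquad
  m(F) = \sup_{x \neq y}
    \frac{\langle F(x) - F(y),\, x - y \rangle}{\lVert x - y \rVert^{2}}.
\]
```

Here $m(F) < 0$ plays the role for the nonlinear system $\dot{x} = F(x)$ that $\mu(A) < 0$ plays for $\dot{x} = Ax$: it forces trajectories to contract exponentially toward the equilibrium, which is how the exponential convergence rate in the abstract is estimated.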

[Journal Article] Asymmetric Hopfield-type Networks: Theory and Applications
Zongben Xu, Guo-Qing Hu and Chung-Ping Kwong
Neural Networks, Vol. 9, No. 3, pp. 483-501, 1996
The Hopfield-type networks with asymmetric interconnections are studied from the standpoint of taking them as computational models. Two fundamental properties of the networks related to their use, feasibility and reliability, are established with a newly developed convergence principle and a classification theory on energy functions. The convergence principle generalizes the one previously known for symmetric networks and underlies feasibility. The classification theory, which categorizes the traditional energy functions into regular, normal and complete ones according to the roles they play in connection with the corresponding networks, implies that the reliability and high efficiency of the networks can follow, respectively, from the regularity and the normality of the corresponding energy functions. The theories developed have been applied to solve a classical NP-hard graph theory problem: finding the maximal independent set of a graph. Simulations demonstrate that the algorithms deduced from the asymmetric theories outperform those deduced from the symmetric theory.
Asymmetric Hopfield-type networks, convergence principle, classification theory on energy functions, regular and normal correspondence, maximal independent set problem, combinatorial optimization
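The maximal-independent-set application can be illustrated with a small Hopfield-style sketch (ours, not the paper's algorithm): binary neurons are updated asynchronously, each firing exactly when no neighbor currently fires, and the fixed points of these dynamics are maximal independent sets.

```python
def maximal_independent_set(adj):
    """Asynchronous Hopfield-style dynamics: neuron i joins the set iff none
    of its neighbors is currently in it; iterate to a fixed point.
    `adj` is an adjacency list; returns the indices of the selected nodes."""
    n = len(adj)
    s = [0] * n
    changed = True
    while changed:
        changed = False
        for i in range(n):
            new = 0 if any(s[j] for j in adj[i]) else 1
            if new != s[i]:
                s[i] = new
                changed = True
    return [i for i in range(n) if s[i]]
```

At a fixed point no two selected nodes are adjacent (independence), and every unselected node has a selected neighbor (maximality), which is the reliability property the energy-function classification is meant to guarantee in the general asymmetric setting.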

[Journal Article] A Decomposition Principle for Complexity Reduction of Artificial Neural Networks
Zongben Xu and Chung-Ping Kwong
Neural Networks, Vol. 9, No. 6, pp. 999-1016, 1996
A decomposition principle is developed for the systematic determination of the dimensionality and the connections of Hopfield-type associative memory networks. Given a set of high-dimensional prototype vectors of the given memory objects, we develop decomposition algorithms to extract a set of lower-dimensional key features of the pattern vectors. Every key feature can be used to build an associative memory with the lowest complexity, and more than one key feature can be used simultaneously to build networks with higher recognition accuracy. In the latter case, we further propose a "decomposed neural network" based on a new encoding scheme to reduce the network complexity. In contrast to the original Hopfield network, the decomposed networks not only increase the network's storage capacity, but also reduce the network's connection complexity from quadratic to linear growth with the network dimension. Both theoretical analysis and simulation results demonstrate that the proposed principle is powerful.
Decomposition principle, Hopfield-type networks, interpolation operator, best approximation projection, associative memories, elementary matrix transformation

[Journal Article] Neural Networks for Convex Hull Computation
Yee Leung, Jiang-She Zhang, and Zong-Ben Xu
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 8, NO. 3, MAY 1997
Computing the convex hull is one of the central problems in various applications of computational geometry. In this paper, a convex hull computing neural network (CHCNN) is developed to solve the related problems in N-dimensional spaces. The algorithm is based on a two-layered neural network, topologically similar to ART, with a newly developed adaptive training strategy called excited learning. The CHCNN provides parallel, online, and real-time processing of data which, after training, yields two closely related approximations, one from within and one from outside, of the desired convex hull. It is shown that the accuracy of the approximate convex hulls obtained is around O(K^(-1/(N-1))), where K is the number of neurons in the output layer of the CHCNN. When K is taken to be sufficiently large, the CHCNN can generate an arbitrarily accurate approximate convex hull. We also show that an upper bound exists such that the CHCNN will yield the precise convex hull when K is larger than or equal to this bound. A series of simulations and applications is provided to demonstrate the feasibility, effectiveness, and high efficiency of the proposed algorithm.
ART-like neural network, computational geometry, convex hull computation, excited learning
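The inner/outer pairing can be illustrated in the plane with a direct geometric sketch (ours, showing the idea of direction-wise support approximation rather than the CHCNN itself; names and parameters are illustrative): probe K unit directions, take the farthest data point in each direction as a vertex of the inner approximation, and the corresponding supporting halfplane for the outer one.

```python
import numpy as np

def support_approximations(points, K=16):
    """For K probe directions on the circle, return the vertices of an inner
    approximation (farthest data point per direction), the directions, and
    the support values h(d) = max_x <d, x> defining the outer approximation."""
    thetas = 2 * np.pi * np.arange(K) / K
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    proj = points @ dirs.T                 # (n, K) projections onto directions
    idx = np.argmax(proj, axis=0)          # support point index per direction
    inner_vertices = points[np.unique(idx)]
    support_values = proj.max(axis=0)
    return inner_vertices, dirs, support_values
```

By construction, the hull of `inner_vertices` sits inside the true convex hull, while the intersection of the halfplanes `{x : <d, x> <= h(d)}` contains it; as K grows the two approximations squeeze toward the exact hull, mirroring the O(K^(-1/(N-1))) accuracy statement in the abstract.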
