吴微 (Wu Wei)
- Name: Wu Wei (吴微)
- Academic title: Doctoral supervisor (博士生导师)
- Discipline: Computational Mathematics
- Biography:
Education: 1974-77 B.Sc., Jilin University; 1978-81 M.Sc., Jilin University; 1985-87 doctorate, University of Oxford, UK.
Positions: 1981-85 Lecturer, Department of Mathematics, Jilin University; 1990-96 Associate Professor, Jilin University; 1996-98 Professor and doctoral supervisor, Jilin University; 1998-present Professor and doctoral supervisor, Department of Applied Mathematics, Dalian University of Technology.
Visiting appointments: 1/1988-8/1990 Postdoctoral fellow, University of Bath, UK; 3/1991-7/1991 Visitor, University of Bath; 1/1996-6/1996 Teaching, Georgia Institute of Technology, USA; 1/1999-6/1999 Teaching, North Dakota State University, USA.
Research output (as of the end of 2005): 3 monographs (Science Press, Higher Education Press, and Marcel Dekker, USA) and 70 refereed papers, 20 of them SCI-indexed.
Funded projects:
1. 1990-93 State Education Commission fund for returned scholars (nonlinear problems), 40,000 RMB.
2. 1990-92 State Education Commission postdoctoral fund (bifurcation theory and computation), 10,000 RMB.
3. 1994-95 Third World Academy of Sciences research grant (from local steady states to global dynamic bifurcation), US$1,000.
4. 1996-98 National Natural Science Foundation of China (computation of dynamical systems), 40,000 RMB.
5. 2000-01 Ministry of Education fund for young backbone teachers (neural network computation), 120,000 RMB.
6. 2000-02 National Natural Science Foundation of China (neural network computation), 100,000 RMB.
7. 2000-03 COSTIND national defense basic research project (shape features for image recognition), 700,000 RMB.
8. 2002-07 Liaoning Province fund for young and middle-aged academic leaders, 75,000 RMB.
9. 2002-03 Matching fund for a "Tenth Five-Year Plan" national planned textbook (neural network computation), 20,000 RMB.
10. 2005-07 National Natural Science Foundation of China (neural network learning algorithms), 160,000 RMB.
Awards: Ministry of Education Science and Technology Progress Award, third class (1999); Ministry of Education young backbone teacher (2000); Liaoning Province young and middle-aged academic leader (2001); Liaoning Province Natural Science Academic Achievement Award, first class (2003 and 2004).
Academic service: Member of a discipline appraisal group of the Academic Degrees Committee of the State Council. Editorial board member of J. Information & Computational Sciences, Journal of Mathematical Research and Exposition, Numerical Mathematics: A Journal of Chinese Universities, and Journal of Dalian University of Technology. Council member of the Chinese Mathematical Society; vice president of the Liaoning Mathematical Society; standing council member of the China Society for Computational Mathematics. Editorial board member of the Information and Computational Science book series (Science Press) and the Information and Computational Science textbook series (Higher Education Press).
[Journal article] A New Training Algorithm for a Fuzzy Perceptron and its Convergence
Jie Yang, Wei Wu, and Zhiqiong Shao
In this paper, we present a new training algorithm for a fuzzy perceptron. In the case where the dimension of the input vectors is two and the training examples are separable, we can prove a finite convergence, i.e., the training procedure for the network weights will stop after finitely many steps. When the dimension is greater than two, stronger conditions are needed to guarantee the finite convergence.
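As illustration only, the separable two-dimensional setting of the theorem can be sketched with the classic (crisp) perceptron update; the fuzzy perceptron of the paper refines this scheme, and all names below are hypothetical:

```python
import numpy as np

def train_perceptron(X, y, max_epochs=1000):
    """Classic perceptron training; when the training examples are
    linearly separable, the loop stops after finitely many updates."""
    X_aug = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias term
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X_aug, y):               # labels yi in {-1, +1}
            if yi * (w @ xi) <= 0:                 # misclassified example
                w += yi * xi                       # standard perceptron update
                mistakes += 1
        if mistakes == 0:                          # finite convergence reached
            break
    return w

# Two separable classes with two-dimensional inputs, as in the theorem
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w = train_perceptron(X, y)
```

After convergence every training example satisfies yi * (w · xi) > 0, which is exactly the stopping condition the finite-step result guarantees is reachable.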
[Journal article] Deterministic Convergence of an Online Gradient Method for BP Neural Networks
Wei Wu, Guorui Feng, Zhengxue Li, and Yuesheng Xu
IEEE Transactions on Neural Networks, vol. 16, no. 3, May 2005
Online gradient methods are widely used for training feedforward neural networks. We prove in this paper a convergence theorem for an online gradient method with variable step size for backward propagation (BP) neural networks with a hidden layer. Unlike most of the convergence results that are of probabilistic and nonmonotone nature, the convergence result that we establish here has a deterministic and monotone nature.
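The training scheme analyzed here can be sketched as a per-example (online) gradient update for a one-hidden-layer network with squared error; this is a generic sketch, not the paper's exact formulation, and the step size and architecture below are hand-picked assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_gradient_step(W, v, x, t, eta):
    """One online gradient update for y = v . sigmoid(W x),
    minimizing the per-example squared error E = (y - t)^2 / 2."""
    h = sigmoid(W @ x)
    err = v @ h - t
    grad_v = err * h                               # dE/dv
    grad_W = np.outer(err * v * h * (1.0 - h), x)  # dE/dW by the chain rule
    return W - eta * grad_W, v - eta * grad_v

# Toy regression: the weights are updated after each example in turn,
# rather than after a full pass over the data
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
T = 0.5 * X[:, 0] + 0.2
W = rng.normal(scale=0.5, size=(4, 1))
v = rng.normal(scale=0.5, size=4)
for _ in range(200):                               # 200 cycles over the data
    for x, t in zip(X, T):
        W, v = online_gradient_step(W, v, x, t, eta=0.1)
mse = np.mean([(v @ sigmoid(W @ x) - t) ** 2 for x, t in zip(X, T)])
```

The paper's contribution is to show that, under its conditions, the error of such an iteration decreases monotonically and converges deterministically, rather than only in a probabilistic sense.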
[Journal article] Convergence of an Online Gradient Method for BP Neural Networks with Stochastic Inputs
Zhengxue Li, Wei Wu, Guorui Feng, and Huifang Lu
An online gradient method for BP neural networks is presented and discussed. The input training examples are permuted stochastically in each cycle of iteration. A monotonicity and a weak convergence of deterministic nature for the method are proved.
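The stochastic-input scheme amounts to drawing a fresh random permutation of the training set at the start of every cycle. A minimal sketch of that iteration structure, with the network update abstracted into a single step function and a toy scalar objective standing in for the paper's network (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def train_with_stochastic_inputs(examples, step, w, cycles):
    """Each cycle presents every example exactly once, in a random order
    that is redrawn per cycle (the stochastic permutation of the inputs)."""
    for _ in range(cycles):
        for idx in rng.permutation(len(examples)):
            w = step(w, examples[idx])    # one online gradient update
    return w

# Toy objective: sum over a of (w - a)^2 / 2, minimized at the mean of a
examples = np.array([1.0, 2.0, 3.0])
step = lambda w, a: w - 0.1 * (w - a)     # gradient step on (w - a)^2 / 2
w = train_with_stochastic_inputs(examples, step, w=0.0, cycles=200)
```

With a fixed step size the iterate settles into a small neighborhood of the minimizer whose composition depends on the random orders drawn, which is why the convergence statement for such schemes is of a weak, monotone character.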
[Journal article] Bifurcation from local steady-states to global dynamics
Wei Wu and Zhengxue Li
A survey is given of some recent developments on bifurcation from local steady-states to global dynamics governed by nonlinear ODEs with Z2 or O(2) symmetries. In particular, we are mainly concerned with a double singular point, which is a Z2-symmetric steady-state possessing a Jacobian with two zero eigenvalues. There exist, under suitable conditions, Hopf points and heteroclinic cycles bifurcating from the double singular point. We also consider a triple zero point with an O(2) symmetry, from which bifurcate standing waves, rotating waves and modulated rotating waves.
Bifurcations, local steady-states, global dynamics, Hopf points, heteroclinic points, Z2 and O(2) symmetries, rotating waves
Zhengxue Li, Wei Wu, and Yulong Tian
In this paper, we study the convergence of an online gradient method for feed-forward neural networks. The input training examples are permuted stochastically in each cycle of iteration. A monotonicity and a weak convergence of deterministic nature are proved.
Feedforward neural networks, Online gradient method, Convergence, Stochastic inputs
[Journal article] Finite Convergence of MRI Neural Network for Linearly Separable Training Patterns
Lijun Liu and Wei Wu
The MRI (Madaline Rule I) neural network has wide applications. A finite convergence of the training of the MRI neural network is proved for linearly separable training patterns.
[Journal article] Deterministic Convergence of an Online Gradient Method for Neural Networks
Wei Wu and Yuesheng Xu
The online gradient method has been widely used as a learning algorithm for neural networks. We establish a deterministic convergence of online gradient methods for the training of a class of nonlinear feedforward neural networks when the training examples are linearly independent. We choose the learning rate to be a constant during the training procedure. The monotonicity of the error function in the iteration is proved. A criterion for choosing the learning rate is also provided to guarantee the convergence. Under certain conditions similar to those for the classical gradient methods, an optimal convergence rate for our online gradient methods is proved.
Online stochastic gradient method, nonlinear feedforward neural networks, deterministic convergence, monotonicity, constant learning rate
[Journal article] Hopf bifurcation near a double singular point with Z2-symmetry and X0-breaking
Wei Wu and Yi Su
Journal of Computational and Applied Mathematics 80 (1997) 277-297
This paper deals with nonlinear equations f(x,λ,α)=0 and the corresponding ODEs x_t=f(x,λ,α) satisfying f(0,λ,α)=0 and a Z2-symmetry. In particular, we are interested in Hopf points, which indicate the bifurcation of periodic solutions of x_t=f(x,λ,α) from (steady-state) solutions of f(x,λ,α)=0. It is shown that under suitable nondegeneracy conditions, there bifurcate two paths of Hopf points from a double singular point, where x=0 and f_x(0,λ,α) has a double zero eigenvalue with one eigenvector symmetric and one anti-symmetric. This result gives a new example of finding Hopf points through local singular points. Our main tools for analysis are some extended systems, which also provide easily implemented algorithms for the numerical computation of the bifurcating Hopf points. A supporting numerical example for a Brusselator model is also presented.
Hopf bifurcations, two-dimensional null space, Z2-symmetry, X0-breaking, two-parameter nonlinear equations
Wei Wu and Zhiqiong Shao
In this paper we prove that the online gradient method for continuous perceptrons converges in finitely many steps when the training patterns are linearly separable.
Feedforward neural networks, Online gradient method, Convergence, Linearly separable, Continuous perceptrons
[Journal article] Training multilayer perceptrons via minimization of sum of ridge functions
Wei Wu, Guorui Feng, and Xin Li
Advances in Computational Mathematics 17: 331-347, 2002
Motivated by the problem of training multilayer perceptrons in neural networks, we consider the problem of minimizing E(x) = Σ_{i=1}^{n} f_i(ξ_i·x), where ξ_i ∈ R^s, 1 ≤ i ≤ n, and each f_i(ξ_i·x) is a ridge function. We show that when n is small the problem of minimizing E can be treated as one of minimizing univariate functions, and we use the gradient algorithms for minimizing E when n is moderately large. For large n, we present the online gradient algorithms and especially show the monotonicity and weak convergence of the algorithms.
multilayer perceptrons, online gradient algorithms, ridge functions
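For a concrete instance, the chain rule gives ∇E(x) = Σ_i f_i′(ξ_i·x) ξ_i, and plain gradient descent on this objective is straightforward to sketch. The quadratic ridges f_i(t) = (t − b_i)²/2 below are purely an assumption for illustration; the paper treats general ridge functions:

```python
import numpy as np

def minimize_ridge_sum(xis, dfs, x0, eta=0.05, iters=500):
    """Gradient descent on E(x) = sum_i f_i(xi_i . x);
    by the chain rule, grad E(x) = sum_i f_i'(xi_i . x) * xi_i."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = sum(df(xi @ x) * xi for xi, df in zip(xis, dfs))
        x = x - eta * grad
    return x

# Quadratic ridges f_i(t) = (t - b_i)^2 / 2, so f_i'(t) = t - b_i
xis = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
bs = [1.0, 2.0, 3.0]
dfs = [lambda t, b=b: t - b for b in bs]
x_star = minimize_ridge_sum(xis, dfs, x0=[0.0, 0.0])
# For this data the unique minimizer solves 2*x1 + x2 = 4 and
# x1 + 2*x2 = 5, i.e. x = (1, 2)
```

Replacing the batch sum over i with one update per example in turn yields the online gradient variant whose monotonicity and weak convergence the paper establishes for large n.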