
20 results found for this scholar

Upload date: November 7, 2007

[Journal Article] An Aftertreatment Technique for Improving the Accuracy of Adomian's Decomposition Method

焦永昌, Y. C. JIAO, Y. YAMAMOTO, C. DANG, Y. HAO

Computers and Mathematics with Applications, 43 (2002), 783-798

Abstract

Adomian’s decomposition method (ADM) is a nonnumerical method which can be adapted for solving nonlinear ordinary differential equations. In this paper, the principle of the decomposition method is described, and its advantages as well as drawbacks are discussed. Then an aftertreatment technique (AT) is proposed, which yields an analytic approximate solution with a fast convergence rate and high accuracy by applying Padé approximation to the series solution derived from ADM. Some concrete examples are also studied to show, with numerical results, how efficiently the AT works.

Keywords: Adomian’s decomposition method, aftertreatment technique, ordinary differential equations, Padé approximant, mathematics
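The aftertreatment idea can be sketched on a toy problem (invented here for illustration, not one of the paper's examples): for y' = -y², y(0) = 1, the Adomian series is 1 - x + x² - ..., and a [1/1] Padé approximant built from the three-term truncation happens to recover the exact solution 1/(1 + x):

```python
# Aftertreatment of a truncated Adomian series with a [1/1] Pade approximant.
# Model problem (illustrative only): y' = -y^2, y(0) = 1.
# ADM series: y ~ 1 - x + x^2 - ...; exact solution: 1/(1 + x).

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*x)/(1 + b1*x) matching c0 + c1*x + c2*x^2."""
    b1 = -c2 / c1          # from matching the x^2 coefficient
    a0 = c0                # from matching the x^0 coefficient
    a1 = c1 + c0 * b1      # from matching the x^1 coefficient
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

c = [1.0, -1.0, 1.0]                          # truncated ADM series 1 - x + x^2
approx = pade_1_1(*c)

x = 0.9
series_val = c[0] + c[1] * x + c[2] * x ** 2  # raw truncated series: 0.91
exact_val = 1.0 / (1.0 + x)                   # ~0.5263
print(series_val, approx(x), exact_val)
```

At x = 0.9 the raw three-term series is badly off, while the Padé-transformed series reproduces the exact value, illustrating the accelerated convergence the abstract describes.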

Upload date: November 7, 2007

[Journal Article] The Solution of the One-Dimensional Nonlinear Poisson’s Equations by the Decomposition Method

焦永昌, Yong-Chang JIAO, Chuang-yin Dang, Yue Hao


Abstract

The decomposition method is a nonnumerical method of solving strongly nonlinear differential equations. In this paper, the method is adapted for the solution of the one-dimensional nonlinear Poisson’s equations governing the linearly graded p-n junctions in semiconductor devices, and the error analysis for the approximate analytic solutions obtained by the decomposition method is carried out. The simulation results show that the solutions obtained by the method are accurate and reliable, and that the quantitative analysis of the linearly graded p-n junctions can be conducted. This work indicates that the decomposition method has some advantages, which opens up a new way for the numerical analysis of semiconductor devices.

Keywords: decomposition method, one-dimensional nonlinear Poisson’s equation, approximate analytic solutions, linearly graded p-n junctions
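The decomposition machinery can be demonstrated on a toy Poisson-type problem (chosen here for illustration; the paper treats the linearly graded p-n junction equation): u'' = u², u(0) = 1, u'(0) = 0. The nonlinearity is expanded in Adomian polynomials Aₙ, and each series term is obtained by integrating twice:

```python
# Adomian decomposition for the toy nonlinear Poisson-type problem
# u'' = u^2, u(0) = 1, u'(0) = 0 (illustrative, not the paper's equation).
# Polynomials are stored as coefficient lists: p[k] is the x^k coefficient.

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def double_integral(p):
    """Integrate twice from 0, with zero integration constants."""
    once = [0.0] + [c / (k + 1) for k, c in enumerate(p)]
    return [0.0] + [c / (k + 1) for k, c in enumerate(once)]

# Adomian polynomials for the quadratic nonlinearity N(u) = u^2 reduce to
# A_n = sum_{i+j=n} u_i * u_j.
u_terms = [[1.0]]                          # u_0 = 1 from the initial conditions
for n in range(3):
    A_n = [0.0]
    for i in range(n + 1):
        A_n = poly_add(A_n, poly_mul(u_terms[i], u_terms[n - i]))
    u_terms.append(double_integral(A_n))   # u_{n+1} = double integral of A_n

series = [0.0] * max(len(t) for t in u_terms)
for t in u_terms:
    series = poly_add(series, t)
print(series)   # approximate analytic solution 1 + x^2/2 + x^4/12 + x^6/72 + ...
```

Each new term only requires multiplying known polynomials and integrating, which is why the method yields approximate analytic (rather than purely numerical) solutions.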

Upload date: November 7, 2007

[Journal Article] Variable Programming: A Generalized Minimax Problem. Part I: Models and Theory

焦永昌, YONG-CHANG JIAO, YEE LEUNG, ZONGBEN XU, JIANG-SHE ZHANG

Computational Optimization and Applications, 30, 229-261, 2005

Abstract

In this two-part series of papers, a new generalized minimax optimization model, termed variable programming (VP), is developed to solve dynamically a class of multi-objective optimization problems with non-decomposable structure. It is demonstrated that this type of problem is more general than existing optimization models. In this part, the VP model is proposed first, and the relationship between variable programming and general constrained nonlinear programming is established. To illustrate its practicality, problems on investment and on low-side-lobe conformal antenna array pattern synthesis, to which VP can be appropriately applied, are discussed for substantiation. Then, theoretical underpinnings of VP problems are established, and difficulties in dealing with them are discussed. Under some mild assumptions, the necessary conditions for unconstrained VP problems with arbitrary and with specific activated feasible sets are derived, respectively. The necessary conditions for the corresponding constrained VP problems under the same mild hypotheses are also examined. Whilst discussion in this part concentrates on the formulation of the VP model and its theoretical underpinnings, the construction of solution algorithms is discussed in Part II.

Keywords: variable programming, minimax, multiobjective optimization, nonlinear programming, necessary condition
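What distinguishes the VP model from an ordinary minimax problem is that the inner feasible set may depend on the outer variable. The following toy objective (both f and Y(x) are invented for illustration and are not from the paper) sketches that structure with brute-force grid search:

```python
# A toy variable-programming-style objective: minimize F(x) = max_{y in Y(x)} f(x, y),
# where the inner feasible set Y(x) = [0, x] itself depends on x.
# f and Y(x) are hypothetical; an ordinary minimax model has a fixed Y.

def f(x, y):
    return (x - 2.0) ** 2 + y          # increasing in y, so the inner max sits at y = x

def F(x):
    # Brute-force inner maximization over a grid of Y(x) = [0, x].
    ys = [x * k / 200.0 for k in range(201)]
    return max(f(x, y) for y in ys)

# Outer minimization by grid search. Analytically F(x) = (x - 2)^2 + x,
# minimized at x = 1.5 with F(1.5) = 1.75.
xs = [3.0 * k / 3000.0 for k in range(1, 3001)]
x_star = min(xs, key=F)
print(x_star, F(x_star))
```

Because the inner set moves with x, F cannot be written as a maximum over a fixed family of functions of x alone, which is the non-decomposable structure the abstract refers to.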

Upload date: November 7, 2007

[Journal Article] Variable Programming: A Generalized Minimax Problem. Part II: Algorithms

焦永昌, YONG-CHANG JIAO, YEE LEUNG, ZONGBEN XU, JIANG-SHE ZHANG

Computational Optimization and Applications, 30, 263-295, 2005

Abstract

In this part of the two-part series of papers, algorithms for solving the variable programming (VP) problems proposed in Part I are investigated. It is demonstrated that the non-differentiability and discontinuity of the maximum objective function, as well as of the summation objective function, in VP problems make their solutions difficult to find. Based on the principle of statistical mechanics, we derive smooth functions to approximate these non-smooth objective functions with specific activated feasible sets. By transforming the minimax problem and the corresponding variable programming problems into their smooth versions, we can solve the resulting problems with efficient algorithms for smooth functions. Relevant theoretical underpinnings of the smoothing techniques are established. The algorithms, in which the minimization of the smooth functions is carried out by the standard quasi-Newton method with the BFGS formula, are tested on some standard minimax and variable programming problems. The numerical results show that the smoothing techniques yield accurate optimal solutions and that the proposed algorithms are feasible and efficient.

Keywords: variable programming, minimax, statistical mechanics principle, smooth optimization
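A simplified stand-in for this kind of statistical-mechanics smoothing is the classical log-sum-exp (maximum-entropy) approximation maxᵢ fᵢ(x) ≈ (1/p) ln Σᵢ exp(p·fᵢ(x)), whose gradient is a softmax-weighted combination of the ∇fᵢ. The concrete fᵢ below are invented for illustration:

```python
import math

# Smooth approximation of F(x) = max(x^2, (x - 2)^2) via log-sum-exp,
# a simplified stand-in for statistical-mechanics smoothing of a minimax
# objective; the component functions here are hypothetical examples.

P = 20.0  # smoothing parameter; larger P tracks the true max more closely

def fs(x):
    return [x * x, (x - 2.0) ** 2]

def grads(x):
    return [2.0 * x, 2.0 * (x - 2.0)]

def smooth_max_grad(x):
    vals = fs(x)
    m = max(vals)
    weights = [math.exp(P * (v - m)) for v in vals]   # numerically stable softmax
    s = sum(weights)
    return sum(w / s * g for w, g in zip(weights, grads(x)))

# Plain gradient descent on the smooth surrogate; the true minimax
# solution of max(x^2, (x - 2)^2) is x = 1.
x = 3.0
for _ in range(2000):
    x -= 0.01 * smooth_max_grad(x)
print(x)   # settles at the minimax point x = 1
```

Once the kink in the max function is smoothed out, any standard smooth optimizer applies; the paper uses quasi-Newton/BFGS where this sketch uses plain gradient descent.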

Upload date: November 7, 2007

[Journal Article] A New Gradient-Based Neural Network for Solving Linear and Quadratic Programming Problems

焦永昌, Yee Leung, Kai-Zhou Chen, Yong-Chang Jiao, Xing-Bao Gao, and Kwong Sak Leung, Senior Member, IEEE

IEEE Transactions on Neural Networks, Vol. 12, No. 5, September 2001

Abstract

In this paper, a new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable, making the resulting network more efficient. This network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming (LP) and quadratic programming (QP) problems with a unique solution or with infinitely many solutions, we prove rigorously that, from any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or the Lagrange method, and the inequality (including nonnegativity) constraints are properly handled. The theory of the proposed network is rigorous and its performance is much better. The simulation results also show that the proposed neural network is feasible and efficient.

Keywords: asymptotic stability, convergence, duality theory, linear programming (LP), neural network, quadratic programming (QP)
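A minimal numerical caricature of such gradient-flow networks (a generic projection-type dynamics, not the paper's specific E(x, y) network) is dx/dt = P(x − (Qx + c)) − x for min ½xᵀQx + cᵀx subject to x ≥ 0, where P clamps to the nonnegative orthant, integrated here with explicit Euler:

```python
# Euler simulation of a projection-type gradient-flow network for the QP
#   min 0.5 x^T Q x + c^T x   s.t. x >= 0,
# via dx/dt = P(x - (Qx + c)) - x, with P = projection onto x >= 0.
# This is a generic sketch, not the paper's E(x, y) network.

Q = [[2.0, 0.0],
     [0.0, 2.0]]
c = [-2.0, 2.0]          # for this Q and c, the optimum is x* = (1, 0)

def grad(x):
    """Gradient of the QP objective: Qx + c."""
    return [sum(Q[i][j] * x[j] for j in range(2)) + c[i] for i in range(2)]

def project(v):
    return [max(vi, 0.0) for vi in v]    # projection onto the constraint x >= 0

x = [5.0, 5.0]           # arbitrary initial state of the network
dt = 0.1
for _ in range(500):
    g = grad(x)
    target = project([x[i] - g[i] for i in range(2)])
    x = [x[i] + dt * (target[i] - x[i]) for i in range(2)]
print(x)   # trajectory settles at the optimum (1, 0)
```

Equilibria of these dynamics satisfy x = P(x − (Qx + c)), which is exactly the fixed-point form of the KKT conditions for the nonnegativity-constrained QP, so the constraints are handled without penalty terms.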

Collaborating scholar

  • 焦永昌 (Xidian University, Shaanxi)