Numerical Optimization: Copyright Information
- ISBN: 9787030166753
- Barcode: 9787030166753; 978-7-03-016675-3
- Binding: standard offset paper
- Number of volumes: not listed
- Weight: not listed
Numerical Optimization: Features
Intended readership: upper-level undergraduates in mathematics, and graduate students in operations research, applied mathematics, and related fields. This book is essential reading for advanced undergraduates and graduate students in operations research and computational mathematics, and is a classic of numerical optimization.
Numerical Optimization: Synopsis
The author is a professor at Northwestern University and serves as editor or associate editor of several international journals. Drawing on experience in teaching, research, and consulting, the author has written a book suited to both students and practitioners. It offers a comprehensive, up-to-date treatment of most of the effective methods in continuous optimization. Each chapter begins with basic concepts and gradually builds up to the techniques in current use. The book emphasizes practical methods and contains numerous illustrations and exercises. It is accessible to a broad readership and can serve as a graduate textbook in engineering, operations research, mathematics, computer science, and business, as well as a handbook for researchers and practitioners in the field. In short, the author aims for a book that is readable, rich in content, and rigorous in exposition, and that demonstrates the practical value of numerical methods.
Numerical Optimization: Table of Contents
1 Introduction
Mathematical Formulation
Example: A Transportation Problem
Continuous versus Discrete Optimization
Constrained and Unconstrained Optimization
Global and Local Optimization
Stochastic and Deterministic Optimization
Optimization Algorithms
Convexity
Notes and References
2 Fundamentals of Unconstrained Optimization
2.1 What Is a Solution?
Recognizing a Local Minimum
Nonsmooth Problems
2.2 Overview of Algorithms
Two Strategies: Line Search and Trust Region
Search Directions for Line Search Methods
Models for Trust-Region Methods
Scaling
Rates of Convergence
R-Rates of Convergence
Notes and References
Exercises
3 Line Search Methods
3.1 Step Length
The Wolfe Conditions
The Goldstein Conditions
Sufficient Decrease and Backtracking
3.2 Convergence of Line Search Methods
3.3 Rate of Convergence
Convergence Rate of Steepest Descent
Quasi-Newton Methods
Newton's Method
Coordinate Descent Methods
3.4 Step-Length Selection Algorithms
Interpolation
The Initial Step Length
A Line Search Algorithm for the Wolfe Conditions
Notes and References
Exercises
4 Trust-Region Methods
Outline of the Algorithm
4.1 The Cauchy Point and Related Algorithms
The Cauchy Point
Improving on the Cauchy Point
The Dogleg Method
Two-Dimensional Subspace Minimization
Steihaug's Approach
4.2 Using Nearly Exact Solutions to the Subproblem
Characterizing Exact Solutions
Calculating Nearly Exact Solutions
The Hard Case
Proof of Theorem 4.3
4.3 Global Convergence
Reduction Obtained by the Cauchy Point
Convergence to Stationary Points
Convergence of Algorithms Based on Nearly Exact Solutions
4.4 Other Enhancements
Scaling
Non-Euclidean Trust Regions
Notes and References
Exercises
5 Conjugate Gradient Methods
5.1 The Linear Conjugate Gradient Method
Conjugate Direction Methods
Basic Properties of the Conjugate Gradient Method
A Practical Form of the Conjugate Gradient Method
Rate of Convergence
Preconditioning
Practical Preconditioners
5.2 Nonlinear Conjugate Gradient Methods
The Fletcher-Reeves Method
The Polak-Ribière Method
Quadratic Termination and Restarts
Numerical Performance
Behavior of the Fletcher-Reeves Method
Global Convergence
Notes and References
Exercises
6 Practical Newton Methods
6.1 Inexact Newton Steps
6.2 Line Search Newton Methods
Line Search Newton-CG Method
Modified Newton's Method
6.3 Hessian Modifications
Eigenvalue Modification
Adding a Multiple of the Identity
Modified Cholesky Factorization
Gershgorin Modification
Modified Symmetric Indefinite Factorization
6.4 Trust-Region Newton Methods
Newton-Dogleg and Subspace-Minimization Methods
Accurate Solution of the Trust-Region Problem
Trust-Region Newton-CG Method
Preconditioning the Newton-CG Method
Local Convergence of Trust-Region Newton Methods
Notes and References
Exercises
7 Calculating Derivatives
7.1 Finite-Difference Derivative Approximations
Approximating the Gradient
Approximating a Sparse Jacobian
Approximating the Hessian
Approximating a Sparse Hessian
7.2 Automatic Differentiation
An Example
The Forward Mode
The Reverse Mode
Vector Functions and Partial Separability
Calculating Jacobians of Vector Functions
Calculating Hessians: Forward Mode
Calculating Hessians: Reverse Mode
Current Limitations
Notes and References
Exercises
8 Quasi-Newton Methods
8.1 The BFGS Method
Properties of the BFGS Method
Implementation
8.2 The SR1 Method
Properties of SR1 Updating
8.3 The Broyden Class
Properties of the Broyden Class
8.4 Convergence Analysis
Global Convergence of the BFGS Method
Superlinear Convergence of BFGS
Convergence Analysis of the SR1 Method
Notes and References
Exercises
9 Large-Scale Quasi-Newton and Partially Separable Optimization
9.1 Limited-Memory BFGS
Relationship with Conjugate Gradient Methods
9.2 General Limited-Memory Updating
Compact Representation of BFGS Updating
SR1 Matrices
Unrolling the Update
9.3 Sparse Quasi-Newton Updates
9.4 Partially Separable Functions
A Simple Example
Internal Variables
9.5 Invariant Subspaces and Partial Separability
Sparsity vs. Partial Separability
Group Partial Separability
9.6 Algorithms for Partially Separable Functions
Exploiting Partial Separability in Newton's Method
Quasi-Newton Methods for Partially Separable Functions
Notes and References
Exercises
……
10 Nonlinear Least-Squares Problems
11 Nonlinear Equations
12 Theory of Constrained Optimization
13 Linear Programming: The Simplex Method
14 Linear Programming: Interior-Point Methods
15 Fundamentals of Algorithms for Nonlinear Constrained Optimization
16 Quadratic Programming
17 Penalty, Barrier, and Augmented Lagrangian Methods
18 Sequential Quadratic Programming
A Background Material
References
Index
Numerical Optimization: About the Author
The author is a professor at Northwestern University and serves as editor or associate editor of several international journals. Drawing on experience in teaching, research, and consulting, the author has written a book suited to both students and practitioners.