Book Introduction

Numerical Optimization (English Edition) 2025 | PDF | Epub | mobi | Kindle ebook editions | Baidu Cloud download

Numerical Optimization (English Edition)
  • By Jorge Nocedal and Stephen J. Wright
  • Publisher: Science Press
  • ISBN: 7030166752
  • Publication year: 2006
  • Listed page count: 636 pages
  • File size: 113 MB
  • File page count: 40179241 pages
  • Subject headings: optimization algorithms - graduate - textbook - English

PDF Download


Click here for the online PDF ebook download of this book [recommended: cloud decompression, fast and convenient]. Direct PDF download, usable on both mobile and PC.
Torrent download [fast BT download speed]. Tip: please use the BT client FDM; see the software download page. Direct-link download [convenient but slow]  [Read this book online]  [Get the extraction password online]

Download Instructions

Numerical Optimization (English Edition): PDF ebook download

The downloaded file is a RAR archive. Decompression software is required to extract it and obtain the PDF.

We recommend downloading with the BitTorrent client Free Download Manager (FDM), which is free, ad-free, and cross-platform. All resources on this site are packaged as BT torrents, so a dedicated BitTorrent client such as BitComet, qBittorrent, or uTorrent is required. Xunlei is currently not recommended because this site's resources are not popular; once a resource becomes popular, Xunlei will work for downloading as well.

(The file page count should be greater than the listed page count, except for multi-volume ebooks.)

Note: all archives on this site require an extraction password. Click here to download an archive extraction tool.
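
For readers who prefer to script the extraction step, the sketch below shows one way to unpack such a password-protected RAR archive in Python. It is only an illustration: it assumes the third-party rarfile package (plus an unrar backend) is installed, and the archive name and password are placeholders, not values supplied by this site.

    # Minimal sketch: extract a password-protected RAR archive with the
    # third-party "rarfile" package (pip install rarfile; needs an unrar backend).
    import rarfile

    ARCHIVE = "numerical_optimization.rar"   # hypothetical name of the downloaded archive
    PASSWORD = "your-extraction-password"    # replace with the password obtained from the site

    with rarfile.RarFile(ARCHIVE) as rf:
        print(rf.namelist())                            # list the files inside the archive
        rf.extractall(path="extracted", pwd=PASSWORD)   # unpack the PDF into ./extracted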

Table of Contents

1 Introduction  1
  Mathematical Formulation  2
  Example: A Transportation Problem  4
  Continuous versus Discrete Optimization  4
  Constrained and Unconstrained Optimization  6
  Global and Local Optimization  6
  Stochastic and Deterministic Optimization  7
  Optimization Algorithms  7
  Convexity  8
  Notes and References  9

2 Fundamentals of Unconstrained Optimization  10
  2.1 What Is a Solution?  13
  Recognizing a Local Minimum  15
  Nonsmooth Problems  18
  2.2 Overview of Algorithms  19
  Two Strategies: Line Search and Trust Region  19
  Search Directions for Line Search Methods  21
  Models for Trust-Region Methods  26
  Scaling  27
  Rates of Convergence  28
  R-Rates of Convergence  29
  Notes and References  30
  Exercises  30

3 Line Search Methods  34
  3.1 Step Length  36
  The Wolfe Conditions  37
  The Goldstein Conditions  41
  Sufficient Decrease and Backtracking  41
  3.2 Convergence of Line Search Methods  43
  3.3 Rate of Convergence  46
  Convergence Rate of Steepest Descent  47
  Quasi-Newton Methods  49
  Newton’s Method  51
  Coordinate Descent Methods  53
  3.4 Step-Length Selection Algorithms  55
  Interpolation  56
  The Initial Step Length  58
  A Line Search Algorithm for the Wolfe Conditions  58
  Notes and References  61
  Exercises  62

4 Trust-Region Methods  64
  Outline of the Algorithm  67
  4.1 The Cauchy Point and Related Algorithms  69
  The Cauchy Point  69
  Improving on the Cauchy Point  70
  The Dogleg Method  71
  Two-Dimensional Subspace Minimization  74
  Steihaug’s Approach  75
  4.2 Using Nearly Exact Solutions to the Subproblem  77
  Characterizing Exact Solutions  77
  Calculating Nearly Exact Solutions  78
  The Hard Case  82
  Proof of Theorem 4.3  84
  4.3 Global Convergence  87
  Reduction Obtained by the Cauchy Point  87
  Convergence to Stationary Points  89
  Convergence of Algorithms Based on Nearly Exact Solutions  93
  4.4 Other Enhancements  94
  Scaling  94
  Non-Euclidean Trust Regions  96
  Notes and References  97
  Exercises  97

5 Conjugate Gradient Methods  100
  5.1 The Linear Conjugate Gradient Method  102
  Conjugate Direction Methods  102
  Basic Properties of the Conjugate Gradient Method  107
  A Practical Form of the Conjugate Gradient Method  111
  Rate of Convergence  112
  Preconditioning  118
  Practical Preconditioners  119
  5.2 Nonlinear Conjugate Gradient Methods  120
  The Fletcher-Reeves Method  120
  The Polak-Ribière Method  121
  Quadratic Termination and Restarts  122
  Numerical Performance  124
  Behavior of the Fletcher-Reeves Method  124
  Global Convergence  127
  Notes and References  131
  Exercises  132

6 Practical Newton Methods  134
  6.1 Inexact Newton Steps  136
  6.2 Line Search Newton Methods  139
  Line Search Newton-CG Method  139
  Modified Newton’s Method  141
  6.3 Hessian Modifications  142
  Eigenvalue Modification  143
  Adding a Multiple of the Identity  144
  Modified Cholesky Factorization  145
  Gershgorin Modification  150
  Modified Symmetric Indefinite Factorization  151
  6.4 Trust-Region Newton Methods  154
  Newton-Dogleg and Subspace-Minimization Methods  154
  Accurate Solution of the Trust-Region Problem  155
  Trust-Region Newton-CG Method  156
  Preconditioning the Newton-CG Method  157
  Local Convergence of Trust-Region Newton Methods  159
  Notes and References  162
  Exercises  162

7 Calculating Derivatives  164
  7.1 Finite-Difference Derivative Approximations  166
  Approximating the Gradient  166
  Approximating a Sparse Jacobian  169
  Approximating the Hessian  173
  Approximating a Sparse Hessian  174
  7.2 Automatic Differentiation  176
  An Example  177
  The Forward Mode  178
  The Reverse Mode  179
  Vector Functions and Partial Separability  183
  Calculating Jacobians of Vector Functions  184
  Calculating Hessians: Forward Mode  185
  Calculating Hessians: Reverse Mode  187
  Current Limitations  188
  Notes and References  189
  Exercises  189

8 Quasi-Newton Methods  192
  8.1 The BFGS Method  194
  Properties of the BFGS Method  199
  Implementation  200
  8.2 The SR1 Method  202
  Properties of SR1 Updating  205
  8.3 The Broyden Class  207
  Properties of the Broyden Class  209
  8.4 Convergence Analysis  211
  Global Convergence of the BFGS Method  211
  Superlinear Convergence of BFGS  214
  Convergence Analysis of the SR1 Method  218
  Notes and References  219
  Exercises  220

9 Large-Scale Quasi-Newton and Partially Separable Optimization  222
  9.1 Limited-Memory BFGS  224
  Relationship with Conjugate Gradient Methods  227
  9.2 General Limited-Memory Updating  229
  Compact Representation of BFGS Updating  230
  SR1 Matrices  232
  Unrolling the Update  232
  9.3 Sparse Quasi-Newton Updates  233
  9.4 Partially Separable Functions  235
  A Simple Example  236
  Internal Variables  237
  9.5 Invariant Subspaces and Partial Separability  240
  Sparsity vs. Partial Separability  242
  Group Partial Separability  243
  9.6 Algorithms for Partially Separable Functions  244
  Exploiting Partial Separability in Newton’s Method  244
  Quasi-Newton Methods for Partially Separable Functions  245
  Notes and References  247
  Exercises  248

10 Nonlinear Least-Squares Problems  250
  10.1 Background  253
  Modeling, Regression, Statistics  253
  Linear Least-Squares Problems  256
  10.2 Algorithms for Nonlinear Least-Squares Problems  259
  The Gauss-Newton Method  259
  The Levenberg-Marquardt Method  262
  Implementation of the Levenberg-Marquardt Method  264
  Large-Residual Problems  266
  Large-Scale Problems  269
  10.3 Orthogonal Distance Regression  271
  Notes and References  273
  Exercises  274

11 Nonlinear Equations  276
  11.1 Local Algorithms  281
  Newton’s Method for Nonlinear Equations  281
  Inexact Newton Methods  284
  Broyden’s Method  286
  Tensor Methods  290
  11.2 Practical Methods  292
  Merit Functions  292
  Line Search Methods  294
  Trust-Region Methods  298
  11.3 Continuation/Homotopy Methods  304
  Motivation  304
  Practical Continuation Methods  306
  Notes and References  310
  Exercises  311

12 Theory of Constrained Optimization  314
  Local and Global Solutions  316
  Smoothness  317
  12.1 Examples  319
  A Single Equality Constraint  319
  A Single Inequality Constraint  321
  Two Inequality Constraints  324
  12.2 First-Order Optimality Conditions  327
  Statement of First-Order Necessary Conditions  327
  Sensitivity  330
  12.3 Derivation of the First-Order Conditions  331
  Feasible Sequences  332
  Characterizing Limiting Directions: Constraint Qualifications  336
  Introducing Lagrange Multipliers  339
  Proof of Theorem 12.1  341
  12.4 Second-Order Conditions  342
  Second-Order Conditions and Projected Hessians  348
  Convex Programs  350
  12.5 Other Constraint Qualifications  351
  12.6 A Geometric Viewpoint  354
  Notes and References  357
  Exercises  358

13 Linear Programming & The Simplex Method  362
  Linear Programming  364
  13.1 Optimality and Duality  366
  Optimality Conditions  366
  The Dual Problem  367
  13.2 Geometry of the Feasible Set  370
  Basic Feasible Points  370
  Vertices of the Feasible Polytope  372
  13.3 The Simplex Method  374
  Outline of the Method  374
  Finite Termination of the Simplex Method  377
  A Single Step of the Method  378
  13.4 Linear Algebra in the Simplex Method  379
  13.5 Other (Important) Details  383
  Pricing and Selection of the Entering Index  383
  Starting the Simplex Method  386
  Degenerate Steps and Cycling  389
  13.6 Where Does the Simplex Method Fit?  391
  Notes and References  392
  Exercises  393

14 Linear Programming: Interior-Point Methods  394
  14.1 Primal-Dual Methods  396
  Outline  396
  The Central Path  399
  A Primal-Dual Framework  401
  Path-Following Methods  402
  14.2 A Practical Primal-Dual Algorithm  404
  Solving the Linear Systems  408
  14.3 Other Primal-Dual Algorithms and Extensions  409
  Other Path-Following Methods  409
  Potential-Reduction Methods  409
  Extensions  410
  14.4 Analysis of Algorithm 14.2  411
  Notes and References  416
  Exercises  417

15 Fundamentals of Algorithms for Nonlinear Constrained Optimization  420
  Initial Study of a Problem  422
  15.1 Categorizing Optimization Algorithms  423
  15.2 Elimination of Variables  426
  Simple Elimination for Linear Constraints  427
  General Reduction Strategies for Linear Constraints  430
  The Effect of Inequality Constraints  434
  15.3 Measuring Progress: Merit Functions  434
  Notes and References  437
  Exercises  438

16 Quadratic Programming  440
  An Example: Portfolio Optimization  442
  16.1 Equality-Constrained Quadratic Programs  443
  Properties of Equality-Constrained QPs  444
  16.2 Solving the KKT System  447
  Direct Solution of the KKT System  448
  Range-Space Method  449
  Null-Space Method  450
  A Method Based on Conjugacy  452
  16.3 Inequality-Constrained Problems  453
  Optimality Conditions for Inequality-Constrained Problems  454
  Degeneracy  455
  16.4 Active-Set Methods for Convex QP  457
  Specification of the Active-Set Method for Convex QP  461
  An Example  463
  Further Remarks on the Active-Set Method  465
  Finite Termination of the Convex QP Algorithm  466
  Updating Factorizations  467
  16.5 Active-Set Methods for Indefinite QP  470
  Illustration  472
  Choice of Starting Point  474
  Failure of the Active-Set Method  475
  Detecting Indefiniteness Using the LBL^T Factorization  475
  16.6 The Gradient-Projection Method  476
  Cauchy Point Computation  477
  Subspace Minimization  480
  16.7 Interior-Point Methods  481
  Extensions and Comparison with Active-Set Methods  484
  16.8 Duality  484
  Notes and References  485
  Exercises  486

17 Penalty, Barrier, and Augmented Lagrangian Methods  490
  17.1 The Quadratic Penalty Method  492
  Motivation  492
  Algorithmic Framework  494
  Convergence of the Quadratic Penalty Function  495
  17.2 The Logarithmic Barrier Method  500
  Properties of Logarithmic Barrier Functions  500
  Algorithms Based on the Log-Barrier Function  505
  Properties of the Log-Barrier Function and Framework 17.2  507
  Handling Equality Constraints  509
  Relationship to Primal-Dual Methods  510
  17.3 Exact Penalty Functions  512
  17.4 Augmented Lagrangian Method  513
  Motivation and Algorithm Framework  513
  Extension to Inequality Constraints  516
  Properties of the Augmented Lagrangian  518
  Practical Implementation  521
  17.5 Sequential Linearly Constrained Methods  523
  Notes and References  525
  Exercises  526

18 Sequential Quadratic Programming  528
  18.1 Local SQP Method  530
  SQP Framework  531
  Inequality Constraints  533
  IQP vs. EQP  534
  18.2 Preview of Practical SQP Methods  534
  18.3 Step Computation  536
  Equality Constraints  536
  Inequality Constraints  538
  18.4 The Hessian of the Quadratic Model  539
  Full Quasi-Newton Approximations  540
  Hessian of Augmented Lagrangian  541
  Reduced-Hessian Approximations  542
  18.5 Merit Functions and Descent  544
  18.6 A Line Search SQP Method  547
  18.7 Reduced-Hessian SQP Methods  548
  Some Properties of Reduced-Hessian Methods  549
  Update Criteria for Reduced-Hessian Updating  550
  Changes of Bases  551
  A Practical Reduced-Hessian Method  552
  18.8 Trust-Region SQP Methods  553
  Approach I: Shifting the Constraints  555
  Approach II: Two Elliptical Constraints  556
  Approach III: Sℓ1QP (Sequential ℓ1 Quadratic Programming)  557
  18.9 A Practical Trust-Region SQP Algorithm  560
  18.10 Rate of Convergence  563
  Convergence Rate of Reduced-Hessian Methods  565
  18.11 The Maratos Effect  567
  Second-Order Correction  570
  Watchdog (Nonmonotone) Strategy  571
  Notes and References  573
  Exercises  574

A Background Material  576
  A.1 Elements of Analysis, Geometry, Topology  577
  Topology of the Euclidean Space R^n  577
  Continuity and Limits  580
  Derivatives  581
  Directional Derivatives  583
  Mean Value Theorem  584
  Implicit Function Theorem  585
  Geometry of Feasible Sets  586
  Order Notation  591
  Root-Finding for Scalar Equations  592
  A.2 Elements of Linear Algebra  593
  Vectors and Matrices  593
  Norms  594
  Subspaces  597
  Eigenvalues, Eigenvectors, and the Singular-Value Decomposition  598
  Determinant and Trace  599
  Matrix Factorizations: Cholesky, LU, QR  600
  Sherman-Morrison-Woodbury Formula  605
  Interlacing Eigenvalue Theorem  605
  Error Analysis and Floating-Point Arithmetic  606
  Conditioning and Stability  608

References  611
Index  625
