Implementation of an augmented Lagrangian method with first-order information

Authors

  • E. G. Birgin, Universidade de São Paulo (USP)
  • D. S. Marcondes, Universidade de São Paulo (USP)

DOI:

https://doi.org/10.5540/03.2025.011.01.0410

Keywords:

Nonlinear optimization, augmented Lagrangian methods, numerical experiments, MINRES, conjugate gradient methods

Abstract

Augmented Lagrangian methods are widely studied and used to solve nonlinear optimization problems with equality and inequality constraints. Algencan is a well-established safeguarded augmented Lagrangian algorithm with several complexity, convergence, and computational performance results. Algencan's subproblems are solved with an active-set method called Gencan. In this work we present numerical results for two modifications of Gencan that use only first-order information and compare them with the current version of Gencan.
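For context, here is a minimal sketch of the Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian on which safeguarded methods of the Algencan family are built (see Hestenes 1969, Powell 1969, Rockafellar 1973, and the Birgin and Martínez book in the references below); the notation is ours, not taken from the paper itself:

```latex
% Problem:  minimize f(x)  subject to  h(x) = 0,  g(x) <= 0.
% PHR augmented Lagrangian with penalty parameter rho > 0:
\[
  L_\rho(x,\lambda,\mu) \;=\; f(x) \;+\; \frac{\rho}{2}\!\left(
    \left\| h(x) + \frac{\lambda}{\rho} \right\|^2
    + \left\| \max\!\left(0,\; g(x) + \frac{\mu}{\rho}\right) \right\|^2
  \right)
\]
% Outer iteration k: the inner solver (Gencan, in Algencan's case)
% approximately minimizes L_{\rho_k}(\,\cdot\,, \lambda^k, \mu^k); then
\[
  \lambda^{k+1} = \lambda^k + \rho_k\, h(x^k), \qquad
  \mu^{k+1} = \max\!\left(0,\; \mu^k + \rho_k\, g(x^k)\right),
\]
% and the safeguarded variant projects the updated multipliers onto a
% fixed box and increases rho_k when feasibility does not improve enough.
```

Since the two Gencan modifications studied use only first-order information, the following toy sketch illustrates one such first-order tool, a nonlinear conjugate gradient iteration of Polak-Ribière type (cf. the conjugate gradient references below). This is a generic textbook sketch with a simple Armijo backtracking line search, not the authors' implementation:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=1000):
    """Polak-Ribiere+ nonlinear conjugate gradient, first-order only.

    Illustrative sketch: uses plain backtracking Armijo line search
    instead of the strong Wolfe search usually required in theory.
    """
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g @ d >= 0.0:          # safeguard: restart with steepest descent
            d = -g
        # Backtracking Armijo line search along d.
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-16:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        # Polak-Ribiere coefficient, clipped at zero (PR+): a negative
        # value amounts to an implicit steepest-descent restart.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Tiny usage example: minimize the convex quadratic 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sol = nonlinear_cg(lambda x: 0.5 * np.sum((A @ x - b) ** 2),
                   lambda x: A.T @ (A @ x - b),
                   np.zeros(2))
print(sol)  # close to the solution of A x = b, i.e. [0.2, 0.4]
```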

References

R. Andrade, E. G. Birgin, J. M. Martínez and L. Martínez. “PACKMOL: A package for building initial configurations for molecular dynamics simulations”. In: Journal of Computational Chemistry 30 (2009), pp. 2157–2164.

R. Andreani, E. G. Birgin, J. M. Martínez and M. L. Schuverdt. “Augmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification”. In: Mathematical Programming 111 (2008), pp. 5–32.

R. Andreani, E. G. Birgin, J. M. Martínez and M. L. Schuverdt. “On Augmented Lagrangian Methods with General Lower-Level Constraints”. In: SIAM Journal on Optimization 18 (2008), pp. 1286–1309.

R. Andreani, G. Haeser, M. L. Schuverdt and P. J. S. Silva. “Two New Weak Constraint Qualifications and Applications”. In: SIAM Journal on Optimization 22 (2012), pp. 1109–1135.

E. G. Birgin and J. M. Martínez. “Complexity and performance of an Augmented Lagrangian algorithm”. In: Optimization Methods and Software 35 (2020), pp. 885–920.

E. G. Birgin and J. M. Martínez. “Improving ultimate convergence of an augmented Lagrangian method”. In: Optimization Methods and Software 23 (2008), pp. 177–195.

E. G. Birgin and J. M. Martínez. “Large-Scale Active-Set Box-Constrained Optimization Method with Spectral Projected Gradients”. In: Computational Optimization and Applications 23 (2002), pp. 101–125.

E. G. Birgin and J. M. Martínez. “On Regularization and Active-set Methods with Complexity for Constrained Optimization”. In: SIAM Journal on Optimization 28 (2018), pp. 1367–1395.

E. G. Birgin and J. M. Martínez. Practical Augmented Lagrangian Methods for Constrained Optimization. Fundamentals of Algorithms. Philadelphia: Society for Industrial and Applied Mathematics, 2014.

E. G. Birgin and J. M. Martínez. “Structured minimal-memory inexact quasi-Newton method and secant preconditioners for augmented Lagrangian optimization”. In: Computational Optimization and Applications 39 (2008), pp. 1–16.

Y. H. Dai and Y. Yuan. “A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property”. In: SIAM Journal on Optimization 10 (1999), pp. 177–182.

R. Fletcher and C. M. Reeves. “Function minimization by conjugate gradients”. In: The Computer Journal 7 (1964), pp. 149–154.

N. I. M. Gould, D. Orban and Ph. L. Toint. “CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization”. In: Computational Optimization and Applications 60 (2014), pp. 545–557.

W. W. Hager and H. Zhang. “A new conjugate gradient method with guaranteed descent and an efficient line search”. In: SIAM Journal on Optimization 16 (2005), pp. 170–192.

W. W. Hager and H. Zhang. “Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent”. In: ACM Transactions on Mathematical Software 32 (2006), pp. 113–137.

M. R. Hestenes. “Multiplier and gradient methods”. In: Journal of Optimization Theory and Applications 4 (1969), pp. 303–320.

A. Lim and F. Roosta. Complexity Guarantees for Nonconvex Newton-MR Under Inexact Hessian Information. 2023. arXiv: 2308.09912.

Y. Liu, M. W. Mahoney, F. Roosta and P. Xu. “Newton-MR: Inexact Newton Method with minimum residual sub-problem solver”. In: EURO Journal on Computational Optimization 10 (2022), p. 100035.

Y. Liu and F. Roosta. A Newton-MR algorithm with complexity guarantees for non-convex smooth unconstrained optimization. 2023. arXiv: 2208.07095.

Y. Liu and F. Roosta. “Convergence of Newton-MR under Inexact Hessian Information”. In: SIAM Journal on Optimization 31 (2021), pp. 59–90.

Y. Liu and F. Roosta. “MINRES: From Negative Curvature Detection to Monotonicity Properties”. In: SIAM Journal on Optimization 32 (2022), pp. 2636–2661.

J. Nocedal and S. J. Wright. Numerical Optimization. 1st ed. New York, NY: Springer, 1999.

C. C. Paige and M. A. Saunders. “Solution of Sparse Indefinite Systems of Linear Equations”. In: SIAM Journal on Numerical Analysis 12 (1975), pp. 617–629.

E. Polak and G. Ribière. “Note sur la convergence de méthodes de directions conjuguées”. In: R.I.R.O. 3 (1969), pp. 35–43.

B. T. Polyak. “The conjugate gradient method in extremal problems”. In: USSR Computational Mathematics and Mathematical Physics 9 (1969), pp. 94–112.

M. J. D. Powell. “A method for nonlinear constraints in minimization problems”. In: Optimization. Ed. by R. Fletcher. New York, NY: Academic Press, 1969, pp. 283–298.

R. T. Rockafellar. “The multiplier method of Hestenes and Powell applied to convex programming”. In: Journal of Optimization Theory and Applications 12 (1973), pp. 555–562.

A. Wächter and L. T. Biegler. “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming”. In: Mathematical Programming 106 (2006), pp. 25–57.

Published

2025-01-20

Issue

Section

Full Papers