On the Delayed Weighted Gradient Method with Simultaneous Step-Size Search

Authors

  • Hugo Lara Urdaneta, Federal University of Santa Catarina, SC.
  • Rafael Aleixo, Federal University of Santa Catarina, SC.

DOI:

https://doi.org/10.5540/03.2022.009.01.0288

Keywords:

Gradient methods, convex quadratic optimization, Krylov subspace methods, DWGM.

Abstract

This article presents a two-step first-order algorithm, based on bidimensional minimization, for convex quadratic optimization problems. Our analysis shows linear convergence and A-orthogonality of the gradient iterates. Numerical experiments show the effectiveness of our method.
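For context, the "two step" scheme analyzed here builds on the delayed weighted gradient method (DWGM) of Oviedo-Leon (2019, cited below): a smoothing step along the current gradient followed by a delayed correction toward the previous iterate, each step size obtained from its own one-dimensional minimization of the gradient norm. The NumPy sketch below is a reconstruction of that baseline method under our reading of the cited references; it does not implement the simultaneous (bidimensional) step-size search proposed in this article, and the function name dwgm and its defaults are illustrative.

import numpy as np

def dwgm(A, b, x0, tol=1e-8, max_iter=1000):
    """Baseline delayed weighted gradient method (after Oviedo-Leon, 2019)
    for minimizing f(x) = 0.5 x'Ax - b'x with A symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                       # gradient at the current iterate x_k
    x_old, g_old = x.copy(), g.copy()   # delayed pair; convention x_{-1} = x_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        w = A @ g
        t = (g @ w) / (w @ w)           # first step size: minimizes ||g - t*A*g||
        y = x - t * g                   # smoothing (steepest-descent-type) point
        r = g - t * w                   # gradient at y, i.e. A @ y - b
        d = r - g_old
        dd = d @ d
        if dd == 0.0:                   # degenerate case: skip the delayed correction
            x, g, x_old, g_old = y, r, x, g
            continue
        beta = -(g_old @ d) / dd        # second step size: minimizes ||g_old + beta*d||
        x, x_old = x_old + beta * (y - x_old), x   # x_{k+1} = x_{k-1} + beta*(y_k - x_{k-1})
        g, g_old = g_old + beta * d, g             # gradient updated consistently
    return x

# Illustrative check on a random symmetric positive definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100.0 * np.eye(100)       # SPD by construction
b = rng.standard_normal(100)
x = dwgm(A, b, np.zeros(100))
print(np.linalg.norm(A @ x - b))        # residual norm at or near tol

Andreani and Raydan (2021, cited below) establish finite termination and orthogonality properties for this baseline scheme; the present article carries a related analysis over to the variant with simultaneous step-size search.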


Author Biographies

Hugo Lara Urdaneta, Federal University of Santa Catarina, SC.

Department of Control and Automation Engineering and Computing.



Rafael Aleixo, Federal University of Santa Catarina, SC.

Department of Mathematics.

References

R. Andreani and M. Raydan. Properties of the delayed weighted gradient method. In: Computational Optimization and Applications 78 (2021), pp. 167–180. doi: 10.1007/s10589-020-00232-9.

R. De Asmundis et al. An efficient gradient method using the Yuan steplength. In: Computational Optimization and Applications 59 (2014), pp. 541–563. doi: 10.1007/s10589-014-9669-5.

J. Barzilai and J. M. Borwein. Two-point step size gradient methods. In: IMA Journal of Numerical Analysis 8.1 (1988), pp. 141–148. doi: 10.1093/imanum/8.1.141.

A. L. Cauchy. Méthode générale pour la résolution des systèmes d'équations simultanées. In: Comptes Rendus Sci. Paris 25 (1847), pp. 536–538.

Y. H. Dai, Y. Huang, and X. W. Liu. A family of spectral gradient methods for optimization. In: Computational Optimization and Applications 74 (2019), pp. 43–65. doi: 10.1007/s10589-019-00107-8.

T. A. Davis and Y. Hu. The University of Florida Sparse Matrix Collection. In: ACM Trans. Math. Softw. 38.1 (2011). doi: 10.1145/2049662.2049663.

D. di Serafino et al. On the steplength selection in gradient methods for unconstrained optimization. In: Applied Mathematics and Computation 318 (2018), pp. 176–195. doi: 10.1016/j.amc.2017.07.037.

E. D. Dolan and J. J. Moré. Benchmarking optimization software with performance profiles. In: Mathematical Programming 91 (2002), pp. 201–213. doi: 10.1007/s101070100263.

R. Fletcher. A limited memory steepest descent method. In: Mathematical Programming 135 (2012), pp. 413–436. doi: 10.1007/s10107-011-0479-6.

G. Frassoldati, L. Zanni, and G. Zanghirati. New adaptive stepsize selections in gradient methods. In: Journal of Industrial & Management Optimization 4.2 (2008), pp. 299–312. doi: 10.3934/jimo.2008.4.299.

A. Friedlander et al. Gradient method with retards and generalizations. In: SIAM Journal on Numerical Analysis 36.1 (1999), pp. 275–289. doi: 10.1137/S003614299427315X.

Y. Huang et al. Gradient methods exploiting spectral properties. In: Optimization Methods and Software 35.4 (2020), pp. 681–705. doi: 10.1080/10556788.2020.1727476.

H. F. Oviedo-Leon. A delayed weighted gradient method for strictly convex quadratic minimization. In: Computational Optimization and Applications 74 (2019), pp. 729–746. doi: 10.1007/s10589-019-00125-6.

T. Serafini, G. Zanghirati, and L. Zanni. Gradient projection methods for large quadratic programs and applications in training support vector machines. In: Optimization Methods and Software 20 (2005), pp. 353–378. doi: 10.1080/10556780512331318182.


Published

2022-12-08

Issue

Section

Full Papers