Dynamic Programming Principle and Hamilton-Jacobi Equation for Optimal Control Problems with Uncertainty

Authors

  • Oscar A. Sierra Fonseca, Escola de Matemática Aplicada da Fundação Getulio Vargas (EMAp-FGV)
  • M. Soledad Aronna, Escola de Matemática Aplicada da Fundação Getulio Vargas (EMAp-FGV)
  • Michele Palladino, University of L’Aquila

Keywords:

Dynamic Programming, Hamilton-Jacobi Equation, Optimal Control, Uncertainty, Hilbert Space

Abstract

This work establishes that, under suitable assumptions, the value function of the Mayer problem in a Hilbert space is the unique lower semicontinuous solution of the Hamilton-Jacobi-Bellman (HJB) equation. By investigating a parametrized Riemann–Stieltjes problem, we obtain compactness of its trajectories, which, combined with a characterization of the lower semicontinuity of the associated value function, yields the existence of optimal controls. Then, using the differential inclusion approach together with the preceding results, we prove uniqueness of the solution to the HJB equation.
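For orientation, the objects named in the abstract can be sketched in the standard Mayer setting; the symbols below (dynamics f, control set U, terminal cost g, horizon T) are generic placeholders, not the specific data of the paper:

```latex
% Value function of a Mayer problem on a Hilbert space H:
% minimize a terminal cost over admissible trajectories.
\[
V(t_0,x_0) \;=\; \inf_{u(\cdot)} \Big\{\, g\big(x(T)\big) \;:\;
  \dot x(t) = f\big(t, x(t), u(t)\big),\;\; u(t)\in U,\;\; x(t_0)=x_0 \,\Big\}.
\]
% Associated HJB equation, solved here in a generalized
% (lower semicontinuous) sense:
\[
-\partial_t V(t,x) \;+\; H\big(t, x, -\nabla_x V(t,x)\big) \;=\; 0,
\qquad V(T,x) = g(x),
\]
% with the Hamiltonian
\[
H(t,x,p) \;=\; \sup_{u \in U}\, \big\langle p,\, f(t,x,u) \big\rangle.
\]
```

In this schematic form, the paper's result says that V is the unique lower semicontinuous function satisfying the HJB equation in the appropriate generalized sense.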


References

T. Donchev. “Properties of the reachable set of control systems”. In: Systems & Control Letters 46 (2002).

H. Frankowska. “A priori estimates for operational differential inclusions”. In: Journal of Differential Equations 84.1 (1990), pp. 100–128.

X. Li and J. Yong. Optimal Control Theory for Infinite Dimensional Systems. Systems & Control: Foundations & Applications. Birkhäuser Boston, 2012.

Published

2025-01-20

Issue

Section

Abstracts