Dynamic Programming Principle and Hamilton-Jacobi Equation for Optimal Control Problems with Uncertainty

Authors

  • Oscar A. Sierra Fonseca, Escola de Matemática Aplicada da Fundação Getulio Vargas (EMAp-FGV)
  • M. Soledad Aronna, Escola de Matemática Aplicada da Fundação Getulio Vargas (EMAp-FGV)
  • Michele Palladino, University of L’Aquila

Keywords:

Dynamic Programming, Hamilton-Jacobi Equation, Optimal Control, Uncertainty, Hilbert Space

Abstract

This work establishes that, under suitable conditions, the value function of the Mayer problem in a Hilbert space is the unique lower semicontinuous solution of the Hamilton-Jacobi-Bellman (HJB) equation. By investigating a parametrized Riemann–Stieltjes problem, we obtain compactness of its set of trajectories, which, combined with a characterization of the lower semicontinuity of the associated value function, yields the existence of optimal controls. Subsequently, using the differential inclusion approach together with the preceding results, we prove the uniqueness of the solution to the HJB equation.
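The abstract's main statement can be sketched in standard notation (the symbols below are illustrative and not taken from the paper): for a Mayer problem on a Hilbert space, the value function is characterized as the unique lower semicontinuous solution of the terminal-value HJB equation.

```latex
% Mayer problem in a Hilbert space H (illustrative notation):
% minimize the terminal cost g(x(T)) over trajectories of
%   \dot{x}(t) = f(x(t), u(t)), \quad u(t) \in U, \quad x(t_0) = x_0.
%
% Value function:
V(t_0, x_0) = \inf_{u(\cdot)} \, g\bigl(x(T;\, t_0, x_0, u)\bigr),
%
% which, in the setting described in the abstract, is the unique
% lower semicontinuous solution of the HJB equation
\partial_t V(t, x)
  + \inf_{u \in U} \bigl\langle \nabla_x V(t, x),\, f(x, u) \bigr\rangle = 0,
\qquad V(T, x) = g(x).
```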


References

T. Donchev. “Properties of the reachable set of control systems”. In: Systems & Control Letters 46 (2002).

H. Frankowska. “A priori estimates for operational differential inclusions”. In: Journal of Differential Equations 84.1 (1990), pp. 100–128.

X. Li and J. Yong. Optimal Control Theory for Infinite Dimensional Systems. Systems & Control: Foundations & Applications. Birkhäuser Boston, 2012.

Published

2025-01-20