Dynamic Optimization

Dynamic optimization deals with problems whose solution depends on time or space. The subject is approached from several angles. First, from a purely mathematical point of view, the problem is solved by the calculus of variations: the first-order Euler conditions are derived, and the second-order Legendre–Clebsch conditions are mentioned. The same problem is then discussed in the Hamilton–Jacobi framework. Next, dynamic optimization in continuous time is treated in the framework of optimal control, where Euler's method, Hamilton–Jacobi theory, and Pontryagin's maximum principle are presented in turn. Several detailed examples accompany the different techniques, and numerical issues, together with possible remedies, are explained. The continuous-time part is followed by the discrete-time part, i.e. dynamic programming; Bellman's theory is explained by both backward and forward induction with clear numerical examples.
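As a concrete illustration of the discrete-time part, the following is a minimal sketch of Bellman backward induction; the stage cost, dynamics, horizon, and grids below are invented for illustration and are not the chapter's own numerical examples:

```python
# Bellman backward induction on an invented finite-horizon problem:
# minimize sum_k (x_k**2 + u_k**2) + x_N**2 with dynamics x_{k+1} = x_k + u_k,
# states clipped to the integer grid {-2, ..., 2}.

N = 3                        # horizon (hypothetical)
states = range(-2, 3)        # admissible states
controls = (-1, 0, 1)        # admissible controls

def step(x, u):
    """Dynamics, clipped to the state grid."""
    return max(-2, min(2, x + u))

def cost(x, u):
    """Stage cost (invented quadratic cost)."""
    return x**2 + u**2

# J[k][x] = optimal cost-to-go from state x at stage k
J = {N: {x: x**2 for x in states}}        # terminal cost
policy = {}
for k in range(N - 1, -1, -1):            # backward in time
    J[k] = {}
    policy[k] = {}
    for x in states:
        q = {u: cost(x, u) + J[k + 1][step(x, u)] for u in controls}
        u_star = min(q, key=q.get)        # Bellman's principle of optimality
        policy[k][x] = u_star
        J[k][x] = q[u_star]

print(J[0][2], policy[0][2])  # optimal cost-to-go from x=2 and first optimal control
```

Forward induction would instead propagate optimal costs from the initial state toward the horizon; the backward sweep shown here is the classical form of Bellman's recursion.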

Notes

A functional is a function of functions; here the integrand depends on the function y and its derivative y′.
  1. (a) We denote by y_z the partial derivative ∂y/∂z, where y and z are scalars. If y is a scalar and z a vector, the notation y_z denotes the gradient vector of partial derivatives ∂y/∂z_i. If y and z are vectors, the notation y_z represents the Jacobian matrix of typical element ∂y_i/∂z_j.
  2. (b) The derivative with respect to a parameter α of an integral with fixed boundaries,

     d/dα ∫ f(x, α) dx  (from x0 to x1),

     is equal to the integral of the partial derivative,

     ∫ ∂f/∂α dx  (from x0 to x1).
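This rule (differentiation under the integral sign with fixed limits) can be checked numerically; the integrand sin(αx) below is an arbitrary example chosen for the check, not one from the chapter:

```python
# Numerical check of differentiation under the integral sign (fixed limits):
# d/da of integral_0^1 sin(a*x) dx should equal integral_0^1 x*cos(a*x) dx.
import math

def trapezoid(g, a, b, n=10_000):
    """Composite trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

alpha, eps = 1.3, 1e-6

# Left side: central finite difference of the integral with respect to alpha.
I = lambda a: trapezoid(lambda x: math.sin(a * x), 0.0, 1.0)
lhs = (I(alpha + eps) - I(alpha - eps)) / (2 * eps)

# Right side: integral of the partial derivative with respect to alpha.
rhs = trapezoid(lambda x: x * math.cos(alpha * x), 0.0, 1.0)

print(abs(lhs - rhs))  # small: the two sides agree
```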

Other authors define the Hamiltonian with an opposite sign in front of the functional's integrand. This changes nothing as long as we remain at the level of first-order conditions; however, the sign changes in condition (12.4.21). See also the footnote in Section 12.4.6.

In many articles, authors refer to the Minimum Principle, which simply results from defining the Hamiltonian H with the opposite sign of the functional. Compared with definition (12.4.36), their Hamiltonian carries the opposite sign, and with that definition the optimal control u∗ minimizes the Hamiltonian.
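The equivalence of the two conventions can be seen on a toy Hamiltonian; the quadratic form below is an invented example, not the chapter's definition (12.4.36):

```python
# Toy check that the maximum and minimum principles select the same control:
# with H(u) = lam*u - 0.5*u**2 (invented example), the control maximizing H
# equals the control minimizing the opposite-sign Hamiltonian -H.
lam = 0.7
controls = [i / 100 for i in range(-200, 201)]   # grid on [-2, 2]

H = lambda u: lam * u - 0.5 * u**2
u_max = max(controls, key=H)                      # maximum principle on H
u_min = min(controls, key=lambda u: -H(u))        # minimum principle on -H

print(u_max, u_min)  # both equal lam, the analytic optimum of H
```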

This notation is that of Pontryaguine et al. (1974). The superscript corresponds to the rank i of the coordinate, while the subscripts (0 and 1) or (0 and f), depending on the author, are reserved for the terminal conditions.


Author information

Authors and Affiliations

  1. Jean-Pierre Corriou, LRGP-CNRS-ENSIC, University of Lorraine, Nancy, France