This is an engaging and well-motivated introduction to the calculus of variations and optimal control theory. It begins with the calculus of variations, a subject not usually part of the undergraduate mathematics curriculum, and uses this necessary background to move on smoothly to optimal control theory.
The main prerequisite is experience solving differential equations and systems of differential equations; an undergraduate course on the subject that includes eigenvalue methods would suffice.
The calculus of variations is presented in the first third of the book. The classic brachistochrone problem provides some initial motivation. The Euler-Lagrange equations are then introduced, along with a clever historical example: the story of Dido, who cut a bull’s hide into strips so as to enclose the greatest possible area for what would become Carthage. The author also uses this example to introduce the concept of transversality and how it applies in some optimization problems.
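For readers new to the subject, the basic object is a functional of the form J[y] = \int_a^b F(x, y, y')\,dx, and the Euler-Lagrange equation, in its simplest one-variable form (the book's notation may differ), is the standard necessary condition that an extremal y must satisfy:

\frac{d}{dx}\!\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0.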
The last two-thirds of the book introduce optimal control theory and include an even bigger variety of illuminating examples. Optimal control is portrayed as an extension of the calculus of variations that incorporates an agent who can exert some control over the dynamics. Both discrete and continuous time are considered, but continuous-time optimal control is the focus of the latter part of the book. Discrete-time control problems using the famous dynamic programming method introduced by Bellman are illustrated with the problem of maximizing the reproductive value of different age classes of elephants.
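The elephant model itself belongs to the book, but the backward-induction pattern behind Bellman's method is simple enough to sketch in Python, the language of some of the book's labs. The states, actions, reward, and transition below are generic placeholders invented for illustration, not the book's example.

    # Minimal sketch of finite-horizon dynamic programming (Bellman backward induction).
    def backward_induction(states, actions, reward, transition, horizon):
        """Return value tables V[t][s] and a greedy policy policy[t][s]."""
        V = [{s: 0.0 for s in states} for _ in range(horizon + 1)]
        policy = [{s: None for s in states} for _ in range(horizon)]
        for t in range(horizon - 1, -1, -1):          # work backward in time
            for s in states:
                best_value, best_action = float("-inf"), None
                for a in actions:
                    value = reward(s, a) + V[t + 1][transition(s, a)]
                    if value > best_value:
                        best_value, best_action = value, a
                V[t][s] = best_value
                policy[t][s] = best_action
        return V, policy

    # Toy usage: five states, actions move the state up or down one step.
    states, actions = range(5), (-1, +1)
    reward = lambda s, a: s                           # prefer higher states
    transition = lambda s, a: min(max(s + a, 0), 4)   # stay inside the state space
    V, policy = backward_induction(states, actions, reward, transition, horizon=3)
    print(V[0], policy[0])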
Nanosatellites, and several versions of the problem of their optimal control, provide a central theme that illustrates the theory, motivates it, and supports intuition. A nanosatellite is a small satellite, typically with a mass of less than 10 kilograms. Such satellites can be deployed individually or in clusters; a cluster can cover larger regions of the earth’s surface, for example to monitor environmental conditions more broadly or to help crews on the ground fight large forest fires.
With nanosatellites as a focal application, the reader sees optimal control theory and controllability questions in several contexts. The author does not intend to investigate the breadth of control theory, but he does address basic issues of controllability and control to an end state or to a target curve. He also touches briefly on singular controls, as well as solutions that simultaneously optimize time and energy.
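To give a flavor of the controllability question in its simplest linear setting (the book's own treatment may differ): a linear system \dot{x} = Ax + Bu with state x \in \mathbb{R}^n is controllable if and only if the classical Kalman rank condition holds,

\operatorname{rank}\begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix} = n.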
The book emphasizes a “learning by doing” strategy, with problems, straightforward questions, and lab exercises that employ standard differential equation software or new code written in Python or Mathematica. Solutions to the questions, problems, and lab exercises are provided in an appendix.
The author approaches his subject with an appreciation for the historical roots of the calculus of variations and a good sense of how optimal control theory is a natural extension of those roots. This is an introductory look at optimal control theory, one that invites further exploration of the many and varied facets of the subject. It would be an exceptionally good choice for introducing the subject.
Bill Satzer (bsatzer@gmail.com), now retired from 3M Company, spent most of his career as a mathematician working in industry on a variety of applications. He did his PhD work in dynamical systems and celestial mechanics.