# Solving Dynamic Optimization Problems

Systems in ModelingToolkit.jl can be directly converted to dynamic optimization or optimal control problems. In such systems, one or more input variables are externally controlled in order to steer the dynamics of the system. A dynamic optimization solves for the optimal time trajectory of the input variables that maximizes or minimizes a desired objective function. For example, a car driver might like to know how to step on the accelerator in order to finish a race while using the least gas.

To begin, let us take a rocket launch example. The input variable here is the thrust exerted by the engine. The rocket state is described by its current height, mass, and velocity. The mass decreases as the rocket loses fuel while thrusting.

```@example dynamic_opt
using ModelingToolkit
t = ModelingToolkit.t_nounits
D = ModelingToolkit.D_nounits

@parameters h_c m₀ h₀ g₀ D_c c Tₘ m_c
@variables begin
    h(..)
    v(..)
    m(..), [bounds = (m_c, 1)]
    T(..), [input = true, bounds = (0, Tₘ)]
end

drag(h, v) = D_c * v^2 * exp(-h_c * (h - h₀) / h₀)
gravity(h) = g₀ * (h₀ / h)

eqs = [D(h(t)) ~ v(t),
    D(v(t)) ~ (T(t) - drag(h(t), v(t))) / m(t) - gravity(h(t)),
    D(m(t)) ~ -T(t) / c]

(ts, te) = (0.0, 0.2)
costs = [-h(te)]
cons = [T(te) ~ 0, m(te) ~ m_c]

@named rocket = System(eqs, t; costs, constraints = cons)
rocket = mtkcompile(rocket, inputs = [T(t)])

u0map = [h(t) => h₀, m(t) => m₀, v(t) => 0]
pmap = [
    g₀ => 1, m₀ => 1.0, h_c => 500, c => 0.5 * √(g₀ * h₀), D_c => 0.5 * 620 * m₀ / g₀,
    Tₘ => 3.5 * g₀ * m₀, T(t) => 0.0, h₀ => 1, m_c => 0.6]
```

What we would like to optimize here is the final height of the rocket. We do this by providing a vector of expressions corresponding to the costs. By default, the sense of the optimization is to minimize the provided cost. So to maximize the rocket height at the final time, we write `-h(te)` as the cost.

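As a hypothetical variation (a sketch only, not used in the rest of this tutorial), if we instead wanted to maximize the remaining fuel at the final time, we would negate the final mass in the same way:

```julia
# Sketch of an alternative cost, not used below: maximize the remaining fuel,
# i.e. the final mass, by minimizing its negative.
costs = [-m(te)]
```
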
Now we can construct a problem and solve it. Let us use JuMP as our backend here. Note that the package extension is actually triggered by [InfiniteOpt](https://infiniteopt.github.io/InfiniteOpt.jl/stable/), not by JuMP - InfiniteOpt includes JuMP but is designed for optimization on function spaces. Additionally, we need to load a solver package - we will use [Ipopt](https://github.com/jump-dev/Ipopt.jl) here (a good choice in general).

We also load DiffEqDevTools below because we will use it to construct an ODE tableau. This is only needed if one desires a custom tableau for the collocation - by default the solver will use RadauIIA5.

```@example dynamic_opt
using InfiniteOpt, Ipopt, DiffEqDevTools
jprob = JuMPDynamicOptProblem(rocket, [u0map; pmap], (ts, te); dt = 0.001)
jsol = solve(jprob, JuMPCollocation(Ipopt.Optimizer, constructRadauIIA5()));
```

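As a sketch of how the tableau argument can be varied (not part of the original tutorial), DiffEqDevTools provides other tableau constructors as well, for example `constructDormandPrince` for the explicit Dormand-Prince tableau:

```julia
# Sketch: swap in a different Butcher tableau for the collocation.
# constructDormandPrince() builds the explicit Dormand-Prince tableau; an
# implicit tableau such as RadauIIA5 is usually the more robust choice, so
# treat this purely as an illustration of the API.
jsol_dp = solve(jprob, JuMPCollocation(Ipopt.Optimizer, constructDormandPrince()));
```
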
The solution has three fields: `jsol.sol` is the ODE solution for the states, `jsol.input_sol` is the ODE solution for the inputs, and `jsol.model` is the wrapped model that we can use to query things like objective and constraint residuals.

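For example, assuming `jsol.model` is the underlying InfiniteOpt/JuMP model (the backend here is InfiniteOpt), JuMP's `objective_value` can be used to query the optimal objective:

```julia
# Sketch, assuming jsol.model is the underlying InfiniteOpt/JuMP model.
# The objective is -h(te), so its negative is the achieved final height.
objective_value(jsol.model)
```
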
Let's plot the final solution and the controller here:

```@example dynamic_opt
using CairoMakie
fig = Figure(resolution = (800, 400))
ax1 = Axis(fig[1, 1], title = "Rocket trajectory", xlabel = "Time")
ax2 = Axis(fig[1, 2], title = "Control trajectory", xlabel = "Time")

for u in unknowns(rocket)
    lines!(ax1, jsol.sol.t, jsol.sol[u], label = string(u))
end
lines!(ax2, jsol.input_sol, label = "Thrust")
axislegend(ax1)
axislegend(ax2)
fig
```

### Free final time problems

There is additionally a class of dynamic optimization problems where we would like to know how to control our system to achieve a goal in the least time. Such problems are called free final time problems, since the final time is unknown. To model these problems in ModelingToolkit, we declare the final time as a parameter.

Below we have a model system called the double integrator. We control the acceleration of a block in order to reach a desired destination in the least time.

```@example dynamic_opt
@variables begin
    x(..)
    v(..)
    u(..), [bounds = (-1.0, 1.0), input = true]
end

@parameters tf

constr = [v(tf) ~ 0, x(tf) ~ 0]
cost = [tf] # Minimize time

@named block = System(
    [D(x(t)) ~ v(t), D(v(t)) ~ u(t)], t; costs = cost, constraints = constr)

block = mtkcompile(block; inputs = [u(t)])

u0map = [x(t) => 1.0, v(t) => 0.0]
tspan = (0.0, tf)
parammap = [u(t) => 0.0, tf => 1.0]
```

The `tf` mapping in the parameter map is treated as an initial guess for the final time.

Please note that, at the moment, free final time problems cannot support constraints defined at definite time values, like `x(3) ~ 2`.

!!! warning

    The Pyomo collocation methods (LagrangeRadau, LagrangeLegendre) are currently buggy for free final time problems. We strongly suggest using BackwardEuler() for such problems when Pyomo is the backend.

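For reference, a Pyomo-backed solve might look roughly like the sketch below. The names `PyomoDynamicOptProblem` and `PyomoCollocation`, as well as the exact arguments, are assumptions patterned on the other backends in this tutorial, so check the API documentation before relying on them:

```julia
# Sketch only: constructor names and arguments here are assumptions patterned
# on the JuMP/InfiniteOpt backends shown in this tutorial. The `steps` keyword
# is discussed in the next paragraph.
using Pyomo
pprob = PyomoDynamicOptProblem(block, [u0map; parammap], tspan; steps = 100)
psol = solve(pprob, PyomoCollocation("ipopt", BackwardEuler()))
```
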
When declaring the problem in this case, we need to provide the number of steps, since `dt` can't be known in advance. Let's solve this problem for the block, using InfiniteOpt as the backend:

```@example dynamic_opt
iprob = InfiniteOptDynamicOptProblem(block, [u0map; parammap], tspan; steps = 100)
isol = solve(iprob, InfiniteOptCollocation(Ipopt.Optimizer));
```

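Since the final time is free, the optimizer chooses it. As a quick check (assuming the returned state solution spans the optimized horizon), the achieved final time can be read from the end of the time grid:

```julia
# Sketch: the last time point of the state solution should correspond to the
# optimized final time (assuming the solution spans the optimized horizon).
isol.sol.t[end]
```
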
Let's plot the final solution and the controller here:

```@example dynamic_opt
fig = Figure(resolution = (800, 400))
ax1 = Axis(fig[1, 1], title = "Block trajectory", xlabel = "Time")
ax2 = Axis(fig[1, 2], title = "Control trajectory", xlabel = "Time")

for u in unknowns(block)
    lines!(ax1, isol.sol.t, isol.sol[u], label = string(u))
end
lines!(ax2, isol.input_sol, label = "Acceleration")
axislegend(ax1)
axislegend(ax2)
fig
```