Describe the bug 🐞
Currently, if a system has a constant in it (e.g. `@constants g = 9.8`) and the constant is defined as a `Float64` number, the result of the simulation will also be `Float64`, regardless of the types of `u0` and `tspan`. For example:
```julia
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using OrdinaryDiffEq

@constants g = 9.8
@variables X(t)
eqs = [D(X) ~ g]
@mtkbuild osys = System(eqs, t)

u0 = [X => 1.0f0]
oprob = ODEProblem{false}(osys, u0, 1.0f0)
sol = solve(oprob)
typeof(oprob[X]) # Float64
eltype(sol[X])   # Float64
```
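If it helps, the promotion itself looks like plain base Julia behavior: presumably the `9.8` ends up as a `Float64` value in the generated right-hand side, and mixing it with a `Float32` state drags everything up to double precision. A minimal sketch (just base Julia promotion rules, nothing ModelingToolkit-specific):

```julia
# Base Julia promotion: combining a Float32 value with a Float64 literal
# yields a Float64, which is why a Float64-valued constant in D(X) ~ g
# would promote the whole right-hand side.
x = 1.0f0            # Float32 state value
typeof(x + 9.8)      # Float64 -- the Float64 literal wins the promotion
typeof(x + 9.8f0)    # Float32 -- stays in single precision
```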
However, if the constant is a `Float32`, then the result can also be a `Float32`, provided everything else is defined accordingly:
```julia
@constants g = 9.8f0
eqs = [D(X) ~ g]
@mtkbuild osys = System(eqs, t)

oprob = ODEProblem(osys, u0, 1.0f0)
sol = solve(oprob)
typeof(oprob[X]) # Float32
eltype(sol[X])   # Float32
```
This seems problematic for large systems with equations distributed across different packages, because it effectively prevents the end user from selecting the precision. This also seems like a problem someone else may have already run into, so let me know if there's just something I'm missing here.
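A possible workaround might be to declare `g` as a parameter rather than a constant, so that its numeric value (and hence its precision) is supplied by whoever builds the problem. A sketch of what I mean (untested; I'm assuming the parameter default can simply be overridden with a `Float32` in the operating point):

```julia
# Sketch of a possible workaround (untested): make g a parameter so the end
# user chooses its precision when the problem is constructed.
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D
using OrdinaryDiffEq

@parameters g = 9.8
@variables X(t)
eqs = [D(X) ~ g]
@mtkbuild osys = System(eqs, t)

# Override the Float64 default with a Float32 value alongside the Float32 u0.
oprob = ODEProblem(osys, [X => 1.0f0, g => 9.8f0], 1.0f0)
sol = solve(oprob)
eltype(sol[X]) # hopefully Float32
```

The trade-off is that `g` would then be carried as a runtime parameter rather than being folded into the equations, which is presumably why `@constants` was used in the first place.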
Environment (please complete the following information):
ModelingToolkit v10.2.0