Model Building
Model specification in NoLimits.jl uses a composable domain-specific language (DSL) centered on the @Model macro. The key design principle is that mechanistic structure, hierarchical variability, and learned nonlinear components can all be expressed and combined within a single, coherent model definition.
All examples in this documentation section use nonlinear models. When random effects are present, they enter the model nonlinearly – reflecting the typical use case in applied longitudinal analysis.
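For example, a subject-level effect may shift the mean of an outcome through a saturating transform rather than entering additively. The following minimal sketch (illustrative parameter names; it reuses only constructs that appear in the full examples below) shows this pattern:

toy = @Model begin
    @fixedEffects begin
        a = RealNumber(0.1)
        σ = RealNumber(0.5, scale=:log)
    end
    @randomEffects begin
        η = RandomEffect(Normal(0.0, 1.0); column=:ID)
    end
    @formulas begin
        # η enters the observation mean through tanh, i.e. nonlinearly
        y ~ Normal(a + tanh(η), σ)
    end
end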
Overview
This page introduces the main composition patterns available in the modeling DSL. Detailed documentation for each block macro is provided on the dedicated sub-pages listed at the bottom of this page.
What Can Be Composed
The @Model DSL supports flexible combinations of:
- Multiple outcomes in a single model, including mixed outcome types (e.g., a continuous measurement and a count variable jointly).
- Multiple random-effect grouping structures – for example, subject-level and site-level variability in the same specification.
- Covariates at different temporal resolutions – time-varying, group-constant, and interpolated covariates can coexist.
- Learned function approximators such as neural networks and soft decision trees, freely combined with known mechanistic terms.
- ODE and non-ODE workflows sharing the same specification language.
The following examples illustrate representative composition patterns.
Example: Multiple Outcomes and Grouping Structures
This model defines two outcomes (y1, y2) with three random effects spanning two grouping levels (:ID and :SITE), together with constant covariates and nonlinear transformations:
model = @Model begin
    @fixedEffects begin
        a = RealNumber(0.2)
        b = RealNumber(0.1)
        s1 = RealNumber(0.3, scale=:log)
        σ2 = RealNumber(0.4, scale=:log)
        μ_re = RealVector([0.0, 0.0])
        Ω_re = RealPSDMatrix([0.6 0.0; 0.0 0.9], scale=:cholesky)
    end
    @covariates begin
        t = Covariate()
        x = ConstantCovariateVector([:Age, :BMI]; constant_on=[:ID, :SITE])
    end
    @randomEffects begin
        # Three random effects spanning two grouping columns (:ID and :SITE)
        η_id = RandomEffect(TDist(6.0); column=:ID)
        η_mv = RandomEffect(MvNormal(μ_re, Ω_re); column=:ID)
        η_site = RandomEffect(SkewNormal(0.0, 1.0, 0.8); column=:SITE)
    end
    @formulas begin
        # Both outcomes share the deterministic node μ
        μ = a + b * x.Age^2 + log1p(x.BMI^2) + tanh(η_id) + η_mv[1]^2 + tanh(η_mv[2]) + η_site^2
        y1 ~ LogNormal(μ, s1)
        y2 ~ Gamma(abs(μ) + 1e-6, σ2)
    end
end
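Both outcomes reuse the deterministic node μ, the bivariate random effect η_mv draws its mean and covariance from the fixed effects μ_re and Ω_re, and every random effect enters μ through a nonlinear transformation. The model itself does not reference a dataset, but the names it uses imply a long-format table with grouping columns ID and SITE, covariate columns Age and BMI, an observation-time column t, and outcome columns y1 and y2. A purely illustrative sketch of such a table, built here with DataFrames.jl (an assumption; values are made up), might look like:

using DataFrames

# Hypothetical long-format layout; see the @covariates sub-page for the exact data interface
df = DataFrame(
    ID   = [1, 1, 2, 2],
    SITE = [1, 1, 2, 2],
    t    = [0.0, 1.0, 0.0, 1.0],
    Age  = [34.0, 34.0, 41.0, 41.0],
    BMI  = [22.5, 22.5, 27.1, 27.1],
    y1   = [1.2, 1.4, 0.9, 1.1],
    y2   = [2.0, 2.3, 1.7, 1.9],
)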
Example: Learned Components in Random-Effect Distributions
Neural networks and soft decision trees can parameterize random-effect distributions, enabling covariate-dependent heterogeneity structures that are difficult to express in conventional NLME software:
# Chain and Dense are layer constructors from the neural-network library in use
# (e.g. Flux.jl); make sure the corresponding package is loaded.
chain = Chain(Dense(2, 4, tanh), Dense(4, 1))
model = @Model begin
    @fixedEffects begin
        μ0 = RealNumber(0.2)
        ση = RealNumber(0.7, scale=:log)
        sy = RealNumber(0.2, scale=:log)
        ζ = NNParameters(chain; function_name=:NN1, calculate_se=false)
        Γ = SoftTreeParameters(2, 2; function_name=:ST1, calculate_se=false)
    end
    @covariates begin
        x = ConstantCovariateVector([:Age, :BMI]; constant_on=:ID)
    end
    @randomEffects begin
        # The log-mean of the subject-level effect depends on Age and BMI
        # through the neural network NN1 and the soft tree ST1
        η = RandomEffect(
            LogNormal(μ0 + NN1([x.Age, x.BMI], ζ)[1] + ST1([x.Age, x.BMI], Γ)[1], ση);
            column=:ID
        )
    end
    @formulas begin
        y ~ Exponential(log1p(η^2) + sy)
    end
end
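The parameter containers ζ and Γ are estimated alongside the scalar fixed effects; function_name registers the callable names NN1 and ST1, which is how they are invoked inside the @randomEffects block, and calculate_se=false presumably excludes these high-dimensional blocks from standard-error computation. The resulting subject-level effect η is LogNormal with a log-mean that varies with Age and BMI through both learned components.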
Example: Hidden Markov Observation Model
For data generated by a latent switching process, hidden Markov outcome models can be specified directly in @formulas:
model = @Model begin
    @fixedEffects begin
        β = RealNumber(0.4)
    end
    @randomEffects begin
        η_id = RandomEffect(Gamma(2.0, 1.0); column=:ID)
    end
    @formulas begin
        # Logistic transforms keep the self-transition probabilities in (0, 1)
        p11 = 1 / (1 + exp(-(β + η_id^2)))
        p22 = 1 / (1 + exp(-(β^2 + log1p(η_id^2))))
        # Row-stochastic 2×2 transition matrix
        P = [p11 1 - p11; 1 - p22 p22]
        y ~ DiscreteTimeDiscreteStatesHMM(
            P,
            (Laplace(-0.5, 0.4), LogNormal(0.1, 0.6)),
            Categorical([0.5, 0.5]),
        )
    end
end
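In this specification, P acts as the state-transition matrix, the tuple of Laplace and LogNormal distributions supplies one emission model per latent state, and the Categorical vector gives the initial state probabilities (see the @formulas sub-page for the exact argument conventions of DiscreteTimeDiscreteStatesHMM). Because P depends on both β and the subject-level effect η_id, the switching dynamics differ across subjects.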
Block Reference
The following sub-pages document each model block in detail:
- @Model – Top-level macro and validation rules
- @helpers – Reusable helper functions
- @fixedEffects – Population-level parameter definitions
- @covariates – Covariate types and interpolation
- @randomEffects – Random-effect distributions and grouping
- @preDifferentialEquation – Time-constant derived quantities
- @DifferentialEquation – ODE dynamics and derived signals
- @initialDE – Initial conditions for ODE states
- @formulas – Observation models and deterministic nodes
- Function Approximators (NNs + SoftTrees) – Neural networks and soft decision trees