Often, the dynamics of the model should be in a steady state at the beginning of a simulation. This reflects the biological situation of cells being in a resting (i.e. unstimulated) state. D2D offers three approaches to deal with steady states, each of which has its own advantages and drawbacks:
Analytical steady states: Steady states imposed analytically via parameter transformations in the CONDITIONS field
Advantages: No simulation required. Reduction in the number of parameters.
Disadvantages: Only tractable for relatively simple steady states. It is up to the user to correctly derive the steady-state equations.
Pre-equilibration based steady states: Steady state imposed by equilibrating the system via a pre-simulation.
Advantages: No additional term in optimization. Reduction in the number of parameters.
Disadvantages: A pre-simulation is required, and the integration fails when no steady state exists.
This method is described here
Constraint based steady states: Steady state imposition by means of penalized optimization.
Advantages: The optimized initial condition parameters correspond to a (near) steady state, and systems that are only 'near' steady state can also be considered.
Disadvantages: The constraint enters profiling and optimization, which can trade off the penalty against fidelity to the data. One has to critically evaluate what the parameter profiles really mean when such a constraint is introduced. Since the constraints are ratio-metric, states near zero are penalized much more strongly in absolute terms.
This method is described on this page
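To illustrate the analytical approach, consider a hypothetical single-state model with dynamics dA/dt = k_synth - k_deg*A. Solving dA/dt = 0 gives the steady state A* = k_synth/k_deg, which can be substituted for the initial-value parameter (here assumed to be named init_A) in the CONDITIONS field of the model definition file:

```
CONDITIONS
init_A    "k_synth / k_deg"
```

This removes init_A as a free parameter and requires no simulation, but the expression must be re-derived whenever the model equations change.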
Using constraint based steady states
A steady state constraint can be assigned by invoking:
ar.model(jm).condition(jc).qSteadyState(:) = 1;
for each model with index jm and experimental condition with index jc. It is important to figure out for which experimental conditions jc the steady state constraints should be activated; here the function arShowDataConditionStructure can help. It is also important to make sure that the specified condition can really lead to a steady state.
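For example, to activate the constraint for every experimental condition of a model at once, one can loop over the condition array (a sketch; which conditions actually admit a steady state should first be checked, e.g. with arShowDataConditionStructure):

```
jm = 1;  % model index
% activate the steady state constraint for all conditions of model jm
for jc = 1:length(ar.model(jm).condition)
    ar.model(jm).condition(jc).qSteadyState(:) = 1;
end
```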
Violations of the constraints are then displayed in the output of the arChi2 function and are used in the optimization of the model.
The constraints are implemented as soft constraints. The strength with which the constraint must be fulfilled is controlled by the field:
ar.model(jm).condition(jc).stdSteadyState(:) = std_val;
It specifies the allowed deviation from zero of the time derivative of the dynamics, dx/dt, at the beginning of the simulation of the respective experimental condition. The smaller stdSteadyState (here given by std_val), the stronger the constraint. In principle, each constraint component can have an individual stdSteadyState value; however, this is probably not necessary in most cases.
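If individual values are needed, the field can be indexed per state like any MATLAB array. A sketch (the state index 3 is an arbitrary example):

```
% strict constraint for all states ...
ar.model(1).condition(1).stdSteadyState(:) = 1e-3;
% ... but a weaker constraint for the third state,
% e.g. one that is allowed to drift slowly
ar.model(1).condition(1).stdSteadyState(3) = 1e-1;
```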
The remaining question is, of course, what a reasonable value for std_val is. Here we can give a hint: fulfilling the constraint is an artificially imposed, i.e. not measured, requirement on the model. It is reasonable to weight it equally in the optimization with the entirety of the experimental data. It is also important to consider the typical timescale of interest; for instance, the cells might be allowed to change only very slowly over time. The following example script selects the stdSteadyState value accordingly:
% turn on steady state constraint
ar.model(1).condition(1).qSteadyState(:) = 1;
arChi2;

% allow approx. 1e-1 relative deviation from steady state
% with respect to 200 minutes simulation of dynamics
ar.model(1).condition(1).stdSteadyState(:) = 1e-1 / 200;

% equally weight constraints versus experimental data
ar.model(1).condition(1).stdSteadyState = ...
    ar.model(1).condition(1).stdSteadyState * (ar.nconstr/ar.ndata);