1. Preliminary Benchmarking Results
  2. TODO: SBML Benchmarking Test Suite
  3. Strategies for SOSlib Performance Optimization

Preliminary Benchmarking Results

Table 1 shows preliminary benchmarking results for five deterministic SBML solvers. Note that these results cannot be considered an absolute performance measurement of the solvers and their algorithms; for example, it is unclear how the absolute and relative error tolerances relate to each other across these tools. The results do, however, give a rough comparative overview of solver behavior for different model types. We would encourage a detailed and collaborative benchmarking study; see below.

Table 1: Benchmarking of SBML solvers
CPU 3.4 GHz, RAM 1 GHz. CPU times for ODE construction and numerical integration in milliseconds.

Biomodels DB id                         9            14           33           22           repressilator
NEQ / time                              22 / 150     86 / 300     28 / 60      10 / 2000    6 / 10e4
rel. / abs. error tolerance             1e-9 / 1e-4  1e-9 / 1e-4  1e-9 / 1e-4  1e-9 / 1e-4  1e-9 / 1e-14
model dynamics                          steady       steady       steady       oscil        oscil/stiff

Dizzy 1.11.1                            15.499       12.711       2.634        19.350       6.369
Jarnac 2.16n                            344          14.531       1.157        5.843        4.516
SBMLToolbox (Matlab)                    188          920          302          5.554        6.681
Copasi 4.0 Build 15                     156          4.062        109          1.437        500
SOSlib 1.6.0pre (CVS, Nov. 17th 2005)   234          515          171          562          1.062

For small models, both the SBMLToolbox for Matlab and Copasi performed in comparable ranges. SOSlib, using CVODES' Backward Differentiation Formula (BDF) method, proved especially fast for the large model 14 (MAPK cascade on scaffold proteins, from Levchenko et al. 2000) and for the oscillatory model 22 (circadian clock in Drosophila, from Ueda et al. 2001). While CVODES (and thus SOSlib) can employ either BDF for stiff ODE systems or the Adams-Moulton method for non-stiff ODE systems, LSODA (used in Copasi) can switch between these methods during a single integration, which might explain Copasi's good results with a `general model of the repressilator' (Müller et al. 2005, submitted to Bull. Math. Biol.) in a stiff parameter regime.

TODO: SBML Benchmarking Test Suite

We would encourage a collaborative effort to develop a detailed SBML Benchmarking Test Suite. First, tool authors should clarify how the absolute and relative error tolerances are interpreted across tools. It might be possible to adapt the SBML Semantic Test Suite to test the precision of output results.
A more detailed benchmarking should then differentiate between:

  1. type of model dynamics:
    1. oscillatory or steady state
    2. stiff or non-stiff
    3. low or high-dimensional ODE system
  2. special SBML constructs:
    1. events
    2. delays
    3. algebraic rules
    4. assignment rules
    5. rate rules
  3. SBML handling:
    1. model parsing and validation
    2. ODE construction
    3. ODE integration
    4. requested number of output times

Please contact us if you are interested in such a project.

Strategies for SOSlib Performance Optimization

Please also see the file OUTLOOK in the source code distribution; the CVS version is always the most recent. The following list pairs known performance limitations with suggested solutions.

Problem: There is no reference for performance measurements.
Suggestion: Create an SBML Benchmarking Test Suite with defined settings and scripts.

Problem: The behavioral differences between BDF/Adams-Moulton and Newton/functional iteration are not exactly known.
Suggestion: Use a benchmarking test suite to compare them systematically.

Problem: Local kineticLaw parameters are replaced during ODE construction and can't be used for sensitivity analysis.
Suggestion: Globalize them: pass an optional listOfParameters to be filled with globalized kineticLaw parameters, and add it to the ODE model.

Problem: Each kinetic law appears as the same formula in many ODEs, while it could be evaluated only once per call of the ODE function f.
Suggestion: Pass an optional listOfRules/listOfParameters to odeFromReactions and convert kineticLaws to a parameter and an assignment rule; then add the new rules/parameters to the ODE model.

Problem: Kinetic law types, encoded as sboTerms in SBML L2V2 (as currently discussed by the SBML community), could be evaluated in hard-coded form, which might be faster than our current approach.
Suggestion: setUserDefinedFunction (in processAST.c) could be used to set hard-coded evaluation for sboTerms encoding certain predefined kinetic laws. The ODEs could then either contain the sboTerm directly, or this approach could be combined with the above suggestion of using `globalized' kinetic laws as assignment rules; these assignment rules could then contain the udf function call.

Problem: Constant parameters need to stay in the formulae for later setting with IntegratorInstance_setVariable, as well as for arbitrary sensitivity analysis.
Suggestion: Pass an optimization flag to odeConstruct that causes replacement of all constant parameters and sets a `NoSensitivityAnalysisPossible' flag for the integratorInstance.

Problem: Each call to IntegratorInstance_setVariableValue frees and recreates the solver structures if an ODE variable is set.
Suggestion: Use CVodeReInit in integrateOneStep when a flag marks the solver structures as invalid; this flag can be set e.g. by IntegratorInstance_setVariableValue.

Problem: Sensitivity analysis behaviour should be interfaced in more detail.
Suggestion: Add separate error control for sensitivities (separate error values, TRUE/FALSE error control) and set good pbar values.

Problem: SUNDIALS comes with parallel solver routines.
Suggestion: Implement the SUNDIALS parallel routines.

Problem: A hard-coded model would always be fastest.
Suggestion: Write a function (based on evaluateAST/differentiateAST) that converts all formulae to C code and writes out a complete C file that only needs linking to SUNDIALS for compilation; compile and run it. Eventually we could include TCC (Ben's suggestion) to do compilation and execution at runtime.

Please contact us if you have further suggestions for performance improvement.




Last modified: 2005-12-19 14:40:39 raim