moocore: Core Algorithms for Multi-Objective Optimization#
Version: 0.1.9.dev0 (What’s new)
Date: Sep 17, 2025
Useful links: Install | Source Repository | Issue Tracker
This webpage documents the moocore Python package. There is also a moocore R package.
The goal of the moocore project (multi-objective/moocore) is to collect and document fast implementations of core mathematical functions and algorithms for multi-objective optimization, and to make them available in different programming languages via similar interfaces. These functions include:
Quality metrics such as (weighted) hypervolume, epsilon, IGD+, etc.
Computation of the Empirical Attainment Function. The empirical attainment function (EAF) describes the probabilistic distribution of the outcomes obtained by a stochastic algorithm in the objective space.
Most critical functionality is implemented in C, with the R and Python packages providing convenient interfaces to the C code.
Keywords: empirical attainment function, summary attainment surfaces, EAF differences, multi-objective optimization, bi-objective optimization, performance measures, performance assessment
The reference guide contains a detailed description of the functions, modules, and objects.
Detailed examples and tutorials are also available.
Benchmarks#
The following plots compare the performance of moocore, pymoo, BoTorch, and jMetalPy. Other optimization packages are not included in the comparison because they rely on these packages for the functionality benchmarked here, so they are at least as slow. For example, Xopt and BoFire use BoTorch, pysamoo is an extension of pymoo, DESDEO already uses moocore for hypervolume and other quality metrics, and most of the multi-objective functionality of DEAP is shared with pymoo. We do not compare with the Bayesian optimization toolbox trieste because it is much slower than BoTorch and too slow to run the benchmarks in a reasonable time.
Not all packages provide the same functionality. For example, pymoo does not provide the epsilon indicator whereas jMetalPy does not provide the IGD+ indicator. BoTorch provides neither of them.
The source code for the benchmarks below can be found at multi-objective/moocore.
Identifying nondominated points#
The following plots compare the speed of finding nondominated solutions, equivalent to moocore.is_nondominated(), in 2D and 3D. We test both keep_weakly=True and keep_weakly=False (the latter is supported by neither pymoo nor DESDEO). The plots show that moocore is 10 times faster than DESDEO and 100 times faster than the other packages.
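As a point of reference for what is being timed, here is a minimal usage sketch of moocore.is_nondominated(); the minimization default and the keep_weakly parameter are taken from the reference guide, and the data are made up for illustration rather than taken from the benchmarks:

```python
import numpy as np
import moocore

# Rows are points, columns are objectives; minimization is assumed by default.
points = np.array([
    [1.0, 5.0],
    [2.0, 3.0],
    [4.0, 4.0],   # dominated by [2.0, 3.0]
    [5.0, 1.0],
])

mask = moocore.is_nondominated(points)
print(points[mask])   # the nondominated subset

# keep_weakly=True also keeps weakly nondominated (e.g. duplicated) points.
mask_weak = moocore.is_nondominated(points, keep_weakly=True)
print(points[mask_weak])
```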
Exact computation of hypervolume#
The following plots compare the speed of computing the hypervolume indicator in 3D, 4D, 5D and 6D. As the plots show, moocore is 100 times faster than the other packages and 1000 times faster than BoTorch and, by extension, Xopt and BoFire. BoTorch is not included for more than 4 objectives because it is tens of thousands of times slower than moocore.
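For orientation, a minimal sketch of an exact hypervolume computation with moocore; the ref keyword for the reference point follows the reference guide, and the values are invented for illustration:

```python
import numpy as np
import moocore

# A small 3-objective (minimization) nondominated set.
points = np.array([
    [1.0, 4.0, 3.0],
    [2.0, 2.0, 2.0],
    [3.0, 1.0, 4.0],
])

# The reference point must be worse than every point in all objectives.
hv = moocore.hypervolume(points, ref=[5.0, 5.0, 5.0])
print(hv)
```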
Approximation of the hypervolume#
The following plots compare the accuracy and speed of approximating the hypervolume with the various methods provided by moocore.hv_approx(). The plots show that method DZ2019-HW consistently produces the lowest approximation error, but it is also slower than method DZ2019-MC. When the number of points increases, both methods are significantly faster than pymoo.
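A small sketch of how the two approximation methods are invoked, assuming the ref and method parameters as documented in the reference guide; the data are random points meant only to illustrate the calls:

```python
import numpy as np
import moocore

rng = np.random.default_rng(42)
# Random 5-objective points (minimization), filtered to the nondominated ones.
points = rng.uniform(size=(500, 5))
points = points[moocore.is_nondominated(points)]
ref = np.full(5, 1.1)

exact = moocore.hypervolume(points, ref=ref)
approx_hw = moocore.hv_approx(points, ref=ref, method="DZ2019-HW")
approx_mc = moocore.hv_approx(points, ref=ref, method="DZ2019-MC")
print(exact, approx_hw, approx_mc)
```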
Comparing the DTLZLinearShape-3d and DTLZLinearShape-4d plots below with those in the previous section shows that the exact computation of the hypervolume in 3D or 4D takes milliseconds even for thousands of points, whereas approximating the hypervolume is significantly slower and, thus, not worth doing.
Approximating the hypervolume becomes more useful for dimensions higher than 5, where the exact computation becomes noticeably slower with hundreds of points. For such problems, method DZ2019-HW is significantly slower than pymoo for small numbers of points, and pymoo in turn is much slower than DZ2019-MC. However, the computation time of DZ2019-HW increases very slowly with the number of points, whereas the computation time of pymoo increases very rapidly.
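To get a feel for this trade-off on your own machine, a rough timing sketch in 6D might look as follows; this is not the actual benchmark code, and absolute timings will vary with hardware and with the structure of the input:

```python
import time
import numpy as np
import moocore

rng = np.random.default_rng(0)
points = rng.uniform(size=(200, 6))              # 200 random 6-objective points
points = points[moocore.is_nondominated(points)]
ref = np.full(6, 1.1)

t0 = time.perf_counter()
exact = moocore.hypervolume(points, ref=ref)
t1 = time.perf_counter()
approx = moocore.hv_approx(points, ref=ref, method="DZ2019-HW")
t2 = time.perf_counter()

print(f"exact  = {exact:.6f} in {t1 - t0:.3f}s")
print(f"approx = {approx:.6f} in {t2 - t1:.3f}s")
```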
Epsilon and IGD+ indicators#
The following plots compare the speed of computing the epsilon and IGD+ indicators. Although the algorithms for computing these metrics are relatively simple and easy to vectorize in Python, the moocore implementation is still 10 to 100 times faster.
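For reference, a minimal sketch of computing both indicators against a reference set; the function names epsilon_additive and igd_plus and the ref keyword are assumed from the reference guide, and the sets are invented for illustration:

```python
import numpy as np
import moocore

# Approximation set and reference set for a 2-objective minimization problem.
approx_set = np.array([[1.5, 4.0], [2.5, 2.5], [4.0, 1.5]])
ref_set = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])

eps = moocore.epsilon_additive(approx_set, ref=ref_set)
igdp = moocore.igd_plus(approx_set, ref=ref_set)
print(eps, igdp)
```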