This framework provides a collection of test problems in Python.
The main features are:
- The most important multi-objective test functions in one place
- Vectorized evaluation using NumPy matrices (no for loops)
- Gradients and Hessian matrices are available through automatic differentiation
- New problems can easily be created using custom classes or functions
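The vectorized evaluation mentioned above can be illustrated with plain NumPy, independent of pymop: a whole population is evaluated with a single array expression instead of a Python loop (a generic sketch using the sphere function, not pymop code).

```python
import numpy as np

# population of 100 solutions, each with 10 decision variables
X = np.random.random((100, 10))

# vectorized evaluation of the sphere function: one call, no Python loop
F_vec = np.sum(np.square(X), axis=1)

# equivalent loop-based evaluation, for comparison
F_loop = np.array([np.sum(x ** 2) for x in X])

assert np.allclose(F_vec, F_loop)
```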
The test problems are available on PyPI:

```bash
pip install pymop
```
For the current development version:

```bash
git clone https://github.com/msu-coinlab/pymop
cd pymop
python setup.py install
```
This package includes single- as well as multi-objective test problems:
A problem can be evaluated by providing an input array, and the constraints can be returned if necessary. Depending on the algorithm, the evaluation can return (1) only the function values (F), (2) F and the constraint violation (CV), or (3) F, CV, and the constraint values (G) themselves. In this framework, a constraint is violated if G > 0; otherwise, the solution is treated as feasible. In case the problem does not have any constraints but return_constraint_violation is set to true, a constraint violation of zero is returned for each entry.
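One common way to aggregate per-constraint values into a single violation measure is to sum the positive parts of G, so that feasible solutions get exactly zero. The sketch below illustrates this convention with plain NumPy; it is not necessarily pymop's internal implementation.

```python
import numpy as np

# constraint values for 4 solutions with 2 constraints each;
# a positive entry means that constraint is violated
G = np.array([[-1.0, -0.5],   # feasible
              [ 0.3, -2.0],   # violates the first constraint
              [ 0.1,  0.2],   # violates both constraints
              [ 0.0, -0.1]])  # feasible (G <= 0)

# aggregated violation: sum of the positive constraint values per solution
CV = np.sum(np.maximum(G, 0), axis=1)

# a solution is feasible if its aggregated violation is zero
feasible = CV <= 0
```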
```python
import numpy as np

from pymop.problems.zdt import ZDT1

problem = ZDT1(n_var=10)

# if the function does not have any constraints, only function values are returned
F = problem.evaluate(np.random.random(10))

# in case more than one solution should be evaluated, you can provide a matrix
F = problem.evaluate(np.random.random((100, 10)))

from pymop.problems.welded_beam import WeldedBeam

problem = WeldedBeam()

# by default, a constrained problem will also return the constraint violation
F, CV = problem.evaluate(np.random.random((100, 4)))

# if only specific values are required, return_values_of can be defined
F = problem.evaluate(np.random.random((100, 4)), return_values_of=["F"])

# in this case more values are returned (also the gradient of the objective values!)
F, G, CV, dF = problem.evaluate(np.random.random((100, 4)),
                                return_values_of=["F", "G", "CV", "dF"])
```
Problem by String
For more convenience, all test problems can be loaded simply by using a string, passing additional parameters if necessary.
```python
from pymop.factory import get_problem

# create a simple test problem from a string
p = get_problem("Ackley")

# the input name is not case sensitive
p = get_problem("ackley")

# also, input parameters can be provided directly
p = get_problem("dtlz1", n_var=20, n_obj=5)
```
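The case-insensitive lookup can be sketched generically as a registry keyed by lowercase names. The `PROBLEMS` dictionary and `get_problem_by_name` below are hypothetical illustrations of this pattern, not pymop's actual factory code.

```python
# hypothetical registry mapping lowercase names to problem constructors
PROBLEMS = {
    "ackley": lambda **kw: ("Ackley", kw),
    "dtlz1": lambda **kw: ("DTLZ1", kw),
}

def get_problem_by_name(name, **kwargs):
    # normalize the name so the lookup is case insensitive
    return PROBLEMS[name.lower()](**kwargs)

p = get_problem_by_name("Ackley")
q = get_problem_by_name("dtlz1", n_var=20, n_obj=5)
```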
Moreover, you can define your custom problem:
```python
import autograd.numpy as anp

from pymop.problem import Problem


# always derive from the main problem class for the evaluation
class MyProblem(Problem):

    def __init__(self, const_1=5, const_2=0.1):

        # define lower and upper bounds - 1d arrays with length equal to the number of variables
        xl = -5 * anp.ones(10)
        xu = 5 * anp.ones(10)

        super().__init__(n_var=10, n_obj=1, n_constr=2, xl=xl, xu=xu, evaluation_of="auto")

        # store custom variables needed for evaluation
        self.const_1 = const_1
        self.const_2 = const_2

    # implement the function evaluation - the arrays to fill are provided directly
    def _evaluate(self, x, out, *args, **kwargs):

        # define the objective function to be evaluated using const_1
        f = anp.sum(anp.power(x, 2) - self.const_1 * anp.cos(2 * anp.pi * x), axis=1)

        # !!! a constraint is violated only if its value is positive !!!

        # constraint: x1 + x2 <= const_2
        g1 = (x[:, 0] + x[:, 1]) - self.const_2

        # constraint: x3 + x4 >= const_2
        g2 = self.const_2 - (x[:, 2] + x[:, 3])

        out["F"] = f
        out["G"] = anp.column_stack([g1, g2])


problem = MyProblem()

F, G, CV, feasible, dF, dG = problem.evaluate(
    anp.random.rand(100, 10),
    return_values_of=["F", "G", "CV", "feasible", "dF", "dG"])
```
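The pattern above - a base class whose `evaluate()` delegates to a user-defined `_evaluate()` that fills an output dictionary - can be sketched in plain Python. This is a simplified illustration of the design, not pymop's actual implementation; the class and method names are chosen for the sketch only.

```python
import numpy as np


class SimpleProblem:
    """Minimal base class: evaluate() delegates to the subclass's _evaluate()."""

    def __init__(self, n_var, n_obj, n_constr):
        self.n_var, self.n_obj, self.n_constr = n_var, n_obj, n_constr

    def evaluate(self, x, return_values_of=("F",)):
        x = np.atleast_2d(x)
        out = {}
        self._evaluate(x, out)
        # derive the aggregated constraint violation from G if requested
        if "CV" in return_values_of:
            G = out.get("G", np.zeros((len(x), 1)))
            out["CV"] = np.sum(np.maximum(G, 0), axis=1)
        return tuple(out[v] for v in return_values_of)

    def _evaluate(self, x, out):
        raise NotImplementedError


class Sphere(SimpleProblem):

    def __init__(self):
        super().__init__(n_var=10, n_obj=1, n_constr=0)

    def _evaluate(self, x, out):
        # vectorized objective: sum of squares per row
        out["F"] = np.sum(np.square(x), axis=1)


F, CV = Sphere().evaluate(np.random.random((100, 10)), return_values_of=("F", "CV"))
```

Keeping the evaluation in `_evaluate` lets the base class handle cross-cutting concerns (reshaping inputs, deriving CV) in one place.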
Problem by Function
A problem can also be generated by providing the evaluation function and the variable boundaries. The number of objectives and constraints is determined by calling the function once.
```python
import numpy as np

from pymop.factory import get_problem_from_func


# this will be the evaluation function that is called each time
def my_evaluate_func(x, out, *args, **kwargs):

    # define the two objectives
    f1 = np.sum(np.square(x - 2), axis=1)
    f2 = np.sum(np.square(x + 2), axis=1)
    out["F"] = np.column_stack([f1, f2])

    # constraint: sum((x - 1)^2) <= 0 (a positive value means a violation)
    out["G"] = np.sum(np.square(x - 1), axis=1)


# load the problem from a function - define 3 variables with the same lower and upper bounds
problem = get_problem_from_func(my_evaluate_func, -10, 10, n_var=3)
F, CV = problem.evaluate(np.random.rand(100, 3))

# or define a problem with varying lower and upper bounds
problem = get_problem_from_func(my_evaluate_func,
                                np.array([-10, -5, -10]),
                                np.array([10, 5, 10]))
F, CV = problem.evaluate(np.random.rand(100, 3))
```
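Inferring the number of objectives and constraints by calling the function once can be sketched generically: evaluate a dummy point and inspect the output shapes. The helper below is a hypothetical illustration of that idea, not pymop's actual code.

```python
import numpy as np


def infer_dimensions(func, n_var):
    # call the evaluation function once on a dummy point and inspect the output shapes
    out = {}
    func(np.zeros((1, n_var)), out)
    n_obj = np.atleast_2d(out["F"]).shape[1]
    n_constr = np.atleast_2d(out["G"]).shape[1] if "G" in out else 0
    return n_obj, n_constr


def sample_func(x, out):
    # two objectives, one constraint
    out["F"] = np.column_stack([np.sum(x ** 2, axis=1),
                                np.sum((x - 1) ** 2, axis=1)])
    out["G"] = np.sum(x, axis=1)


n_obj, n_constr = infer_dimensions(sample_func, 3)
```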
Also, for most of the problems the optimal Pareto front is stored or can be generated dynamically.
```python
from pymop.factory import get_problem, get_uniform_weights

# for some problems the pareto front does not need any parameters
pf = get_problem("tnk").pareto_front()
pf = get_problem("osy").pareto_front()

# for other problems the number of non-dominated points can be defined
pf = get_problem("zdt1").pareto_front(n_pareto_points=100)

# for DTLZ, for example, the reference directions must be provided, because
# the pareto front of the specific problem instance depends on them
ref_dirs = get_uniform_weights(100, 3)
pf = get_problem("dtlz1", n_var=7, n_obj=3).pareto_front(ref_dirs)
```
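For ZDT1, for instance, the Pareto front has a known analytical form, f2 = 1 - sqrt(f1) with f1 in [0, 1], so it can be generated directly with NumPy. This is a sketch of what such a pareto_front() method computes; the function name is chosen for illustration.

```python
import numpy as np


def zdt1_pareto_front(n_points=100):
    # ZDT1's Pareto-optimal front: f2 = 1 - sqrt(f1), f1 in [0, 1]
    f1 = np.linspace(0, 1, n_points)
    f2 = 1 - np.sqrt(f1)
    return np.column_stack([f1, f2])


pf = zdt1_pareto_front(100)
```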
Feel free to contact me if you have any questions:
- Gradient and Hessian information for all problems are available using autograd.
- For each evaluation the list of returned values can be defined
- cdtlz and ctp problems were added.
- Introduced a variable type to define a problem more precisely
- Simplified the problem definition by using super() in constructor
- Improved the documentation and usage examples
- We modified the global interface for the evaluation function using args and kwargs
- Pareto fronts are now dependent on parameters instead of class attributes
- First official release providing a bunch of test problems
- Some redesign of classes compared to early versions
- Added truss_2d problem
- E. Zitzler, K. Deb, and L. Thiele. Comparison of multiobjective evolutionary algorithms: empirical results. Evolutionary Computation, 8(2):173–195, 2000.
- K. Deb, L. Thiele, M. Laumanns, and E. Zitzler. Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC '02), volume 1, 825–830, May 2002.
- H. Jain and K. Deb. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE Transactions on Evolutionary Computation, 18(4):602–622, 2014. doi:10.1109/TEVC.2013.2281534.
- K. Deb, A. Pratap, and T. Meyarivan. Constrained test problems for multi-objective evolutionary optimization. In Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO '01), 284–298, London, UK, 2001. Springer-Verlag.
- S. Huband, P. Hingston, L. Barone, and L. While. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10(5):477–506, Oct 2006.