Constraints and Penalties#
Constrained Optimization Problems#
In mathematical optimization, a constraint is a condition that the solution must satisfy. For example, the following problem is a constrained optimization problem.

\[
\min_x f(x) \quad \text{s.t.} \quad g(x) = 0
\]

Here, \(f\) and \(g\) are functions of the decision variable \(x\). The condition \(g(x) = 0\) is called an equality constraint, and the set of all \(x\) satisfying \(g(x) = 0\) is called the feasible set. Constraints can also be inequality constraints such as \(g(x) \leq 0\). For example, the following problem is also a constrained optimization problem.

\[
\min_x f(x) \quad \text{s.t.} \quad g(x) \leq 0
\]
In jijmodeling, both equality and inequality constraints can be described using the Constraint class. For example, the equality constraint \(\sum_i x_i = 1\) can be expressed as follows.
import jijmodeling as jm
N = jm.Placeholder("N")
x = jm.BinaryVar("x", shape=(N,))
n = jm.Element('n', belong_to=(0, N))
jm.Constraint("onehot", jm.sum(n, x[n]) == 1)
Note that in the above code, the string “onehot” is specified as the first argument of jm.Constraint. Constraint objects have a name and a constraint expression; the names are used to check whether each constraint is satisfied. The constraint expression must be a comparison expression built with one of the three comparison operators ==, <=, or >=. Multiple constraints can be imposed on a single problem as follows.
x = jm.BinaryVar("x", shape=(4,))
problem = jm.Problem("constraint_sample")
problem += jm.Constraint("c01", x[0] + x[1] <= 1)
problem += jm.Constraint("c12", x[1] + x[2] <= 1)
problem += jm.Constraint("c23", x[2] + x[3] >= 0)
Tip
Other comparison operators (e.g., >) and logical operators are not supported, so the following code is invalid.
x = jm.BinaryVar("x", shape=(4,))
jm.Constraint("unsupported", (x[0] + x[1] <= 1) | (x[1] + x[2] <= 1))
forall Constraints#
Constraints are often indexed by variables. For example, the following problem is a constrained optimization problem in which a constraint is imposed for every value of the index \(i\).

\[
\begin{align*}
\min_x \quad & \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} a_{ij} x_{ij} \\
\text{s.t.} \quad & \sum_{j=0}^{M-1} x_{ij} = 1 \quad \forall i \in \left\{0, \ldots, N-1\right\}
\end{align*}
\]
To express such \(\forall i \in \left\{0, \ldots, N - 1\right\}\), the Constraint object has a forall option. For example, the above problem can be expressed as follows.
N = jm.Placeholder("N")
M = jm.Placeholder("M")
a = jm.Placeholder("a", ndim=2)
x = jm.BinaryVar("x", shape=(N, M))
i = jm.Element('i', belong_to=(0, N))
j = jm.Element('j', belong_to=(0, M))
problem = jm.Problem("forall_sample")
problem += jm.sum([i, j], a[i, j] * x[i, j])
problem += jm.Constraint("onehot", jm.sum(j, x[i, j]) == 1, forall=i)
What is a Penalty?#
Penalty methods and the method of Lagrange multipliers are the most common ways to convert a constrained optimization problem into an unconstrained one. Here, we look at the penalty method. Consider the problem

\[
\min_x f(x) \quad \text{s.t.} \quad g(x) = 0.
\]

This problem is converted into the following unconstrained optimization problem.

\[
\min_x f(x) + \alpha p(x)
\]
In this conversion, \(\alpha\) (the penalty coefficient, or Lagrange multiplier) and \(p(x)\) (the penalty term) play the central roles. Typically, \(p(x)\) is defined as \(p(x) = g(x)^2\), so that \(p(x) = 0\) holds exactly when the constraint is satisfied. If a minimizer \(x^*\) of \(f(x) + \alpha p(x)\) satisfies \(p(x^*) = 0\), then \(x^*\) is also an optimal solution of the original constrained problem. If instead \(p(x^*) > 0\), increasing the penalty coefficient \(\alpha\) and solving the unconstrained problem again increases the likelihood of obtaining a solution to the original problem.
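The effect of increasing \(\alpha\) can be seen on a toy problem. The following standalone Python sketch (not jijmodeling code; the function \(f(x) = x^2\) and the constraint \(x - 1 = 0\) are chosen purely for illustration) minimizes the penalized objective in closed form:

```python
# Penalty method on a toy problem (illustrative values, not jijmodeling code):
#   minimize f(x) = x^2  subject to  g(x) = x - 1 = 0
# The penalized objective x^2 + alpha * (x - 1)^2 is quadratic, so setting its
# derivative to zero gives the exact minimizer:
#   2x + 2 * alpha * (x - 1) = 0  =>  x = alpha / (1 + alpha)

def penalized_minimizer(alpha: float) -> float:
    """Minimizer of x^2 + alpha * (x - 1)^2."""
    return alpha / (1.0 + alpha)

# As alpha grows, the minimizer approaches the feasible optimum x = 1,
# and the constraint violation |g(x*)| shrinks toward zero.
for alpha in [1.0, 10.0, 100.0, 1000.0]:
    x = penalized_minimizer(alpha)
    print(f"alpha={alpha:7.1f}  x*={x:.4f}  |g(x*)|={abs(x - 1.0):.4f}")
```

Note that no finite \(\alpha\) makes the violation exactly zero here; in practice one raises \(\alpha\) until the violation is acceptably small.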
Some solvers accept only unconstrained optimization problems; the “U” in QUBO stands for “Unconstrained”. To input a mathematical model formulated in jijmodeling as a constrained optimization problem into a solver with a QUBO input format, it must first be converted into an unconstrained optimization problem. This is done with OMMX, via jijmodeling.Interpreter.
Converting Constraints to Penalty Terms#
jijmodeling itself does not provide a function to convert constraints into penalty terms; OMMX performs that conversion. Here, we explain how OMMX converts constraints into penalty terms, using the following simple problem.

\[
\min_x \sum_{i=0}^{N-1} a_i x_i \quad \text{s.t.} \quad \sum_{i=0}^{N-1} x_i = 1, \quad x_i \in \{0, 1\}
\]
This problem can be formulated in jijmodeling as follows.
import jijmodeling as jm
a = jm.Placeholder("a", ndim=1)
N = a.len_at(0, latex="N")
i = jm.Element('i', belong_to=(0, N))
x = jm.BinaryVar('x', shape=(N,))
problem = jm.Problem('translate_constraint_to_penalty')
problem += jm.sum(i, a[i]*x[i])
problem += jm.Constraint(
    'onehot',
    jm.sum(i, x[i]) == 1,
)
problem
Through OMMX’s to_qubo, this constrained optimization problem is converted into the following unconstrained optimization problem.

\[
\min_x \sum_{i=0}^{N-1} a_i x_i + \alpha \left( \sum_{i=0}^{N-1} x_i - 1 \right)^2
\]
Here, let’s consider the case where \(a = [1, 2]\) and \(\alpha = 5\).
instance_data = {
"a": [1, 2],
}
interpreter = jm.Interpreter(instance_data)
ommx_instance = interpreter.eval_problem(problem)
qubo, constant = ommx_instance.to_qubo(penalty_weights={0: 5})
The result is as follows.
qubo
{(0, 0): -4.0, (0, 1): 10.0, (1, 1): -3.0}
constant
5.0
These values of qubo and constant arise from the following calculation.

\[
\begin{align*}
\sum_{i} a_i x_i + \alpha \left( \sum_i x_i - 1 \right)^2
&= x_0 + 2 x_1 + 5 \left( x_0 + x_1 - 1 \right)^2 \\
&= -4 x_0 + 10 x_0 x_1 - 3 x_1 + 5
\end{align*}
\]
The above calculation uses the fact that the binary variable \(x_i\) satisfies \(x_i^2 = x_i\).
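As a sanity check, one can enumerate all assignments of \((x_0, x_1)\) and verify that evaluating the QUBO (plus the constant) reproduces the penalized objective. This standalone sketch assumes the qubo dictionary and constant shown above:

```python
from itertools import product

# QUBO coefficients and constant offset obtained above
qubo = {(0, 0): -4.0, (0, 1): 10.0, (1, 1): -3.0}
constant = 5.0
a, alpha = [1, 2], 5

for x in product([0, 1], repeat=2):
    # QUBO value: sum of Q[i, j] * x_i * x_j, plus the constant offset
    qubo_value = sum(c * x[i] * x[j] for (i, j), c in qubo.items()) + constant
    # Penalized objective: sum_i a_i x_i + alpha * (sum_i x_i - 1)^2
    penalized = sum(ai * xi for ai, xi in zip(a, x)) + alpha * (sum(x) - 1) ** 2
    assert qubo_value == penalized
    print(x, qubo_value)
```

The minimum value, 1.0, is attained at \(x = (1, 0)\), which satisfies the one-hot constraint and is indeed the optimal solution of the original constrained problem for \(a = [1, 2]\).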
The conversion to QUBO through OMMX is divided into two phases:

1. Convert the Problem object and instance_data into an ommx.v1.Instance object.
2. Convert the ommx.v1.Instance object to QUBO by calling the to_qubo method and specifying multipliers.
Note
For more details on converting to QUBO, refer to the OMMX reference.