Constraints and Penalties#

Constrained Optimization Problems#

In mathematical optimization, a constraint is a condition that the solution must satisfy. For example, the following problem is a constrained optimization problem.

\[\begin{split} \begin{aligned} \text{Minimize} & \quad f(x) \\ \text{subject to} & \quad g(x) = 0 \end{aligned} \end{split}\]

Here, \(f\) and \(g\) are functions of the decision variable \(x\). The condition \(g(x) = 0\) is called an equality constraint, and the set of all \(x\) satisfying the constraints is called the feasible set. Constraints can also be inequalities such as \(g(x) \leq 0\). For example, the following problem is also a constrained optimization problem.

\[\begin{split} \begin{aligned} \text{Minimize} & \quad f(x) \\ \text{subject to} & \quad g(x) \leq 0 \end{aligned} \end{split}\]
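For instance, if \(f(x) = (x - 2)^2\) and \(g(x) = x - 1\), the feasible set is \(x \leq 1\), and the constrained minimum is attained at the boundary point \(x = 1\) rather than at the unconstrained minimizer \(x = 2\).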

In jijmodeling, both equality and inequality constraints can be described using the Constraint class. For example, the equality constraint \(\sum_i x_i = 1\) can be expressed as follows.

import jijmodeling as jm

N = jm.Placeholder("N")                # problem size, supplied later as instance data
x = jm.BinaryVar("x", shape=(N,))      # binary decision variables x_0, ..., x_{N-1}
n = jm.Element('n', belong_to=(0, N))  # index n running over 0, ..., N-1

jm.Constraint("onehot", jm.sum(n, x[n]) == 1)
\[\begin{split}\begin{array}{cccc} & \text{onehot} & \displaystyle \sum_{n = 0}^{N - 1} x_{n} = 1 & \\ \end{array}\end{split}\]

Note that in the above code, the string “onehot” is passed as the first argument of jm.Constraint. A Constraint object has a name and a constraint expression; the name identifies the constraint later, for example when checking whether it is satisfied or when setting its multiplier. The constraint expression must be a comparison built with one of the three operators ==, <=, or >=. Multiple constraints can be imposed on a single problem as follows.

x = jm.BinaryVar("x", shape=(4,))
problem = jm.Problem("constraint_sample")
problem += jm.Constraint("c01", x[0] + x[1] <= 1)
problem += jm.Constraint("c12", x[1] + x[2] <= 1)
problem += jm.Constraint("c23", x[2] + x[3] >= 0)

Tip

Other comparison operators (e.g., >) and logical operators are not supported, so the following code raises an error.

x = jm.BinaryVar("x", shape=(4,))
# Raises an error: constraint expressions cannot be combined with logical operators.
jm.Constraint("unsupported", (x[0] + x[1] <= 1) | (x[1] + x[2] <= 1))

forall Constraints#

Constraints are often imposed for every value of an index. For example, the following problem is a constrained optimization problem.

\[\begin{split} \begin{aligned} \text{Minimize} & \quad \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} a_{ij} x_{ij} \\ \text{subject to} & \quad \sum_{j = 0}^{M - 1} x_{ij} = 1 \quad \forall i \in \left\{0, \ldots, N - 1\right\} \end{aligned} \end{split}\]

To express such \(\forall i \in \left\{0, \ldots, N - 1\right\}\), the Constraint object has a forall option. For example, the above problem can be expressed as follows.

N = jm.Placeholder("N")
M = jm.Placeholder("M")
a = jm.Placeholder("a", ndim=2)
x = jm.BinaryVar("x", shape=(N, M))
i = jm.Element('i', belong_to=(0, N))
j = jm.Element('j', belong_to=(0, M))

problem = jm.Problem("forall_sample")
problem += jm.sum([i, j], a[i, j] * x[i, j])
problem += jm.Constraint("onehot", jm.sum(j, x[i, j]) == 1, forall=i)
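The forall argument also accepts a list of elements when a constraint should hold for every combination of indices. A minimal sketch, assuming an auxiliary binary variable y introduced purely for illustration:

y = jm.BinaryVar("y", shape=(N, M))

# One constraint for every pair (i, j), i.e. "forall i, forall j".
jm.Constraint("pairwise", x[i, j] + y[i, j] <= 1, forall=[i, j])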

What is a Penalty?#

Penalty methods and the method of Lagrange multipliers are the most common ways to convert a constrained optimization problem into an unconstrained one. Here, we look at the penalty method, again using the equality-constrained problem:

\[\begin{split} \begin{aligned} \text{Minimize} & \quad f(x) \\ \text{subject to} & \quad g(x) = 0 \end{aligned} \end{split}\]

This problem is converted into the following unconstrained optimization problem.

\[ \text{Minimize} \quad f(x) + \alpha p(x). \]

In this conversion, \(\alpha\) (the penalty coefficient, or Lagrange multiplier) and \(p(x)\) (the penalty term) play important roles. Typically, \(p(x)\) is defined as \(p(x) = g(x)^2\), so that \(p(x) = 0\) exactly when the constraint holds. If a minimizer \(x\) of \(f(x) + \alpha p(x)\) satisfies \(p(x) = 0\), then that \(x\) is also an optimal solution of the original constrained optimization problem. If instead the obtained solution has \(p(x) > 0\), increasing the penalty coefficient \(\alpha\) and solving the unconstrained problem again increases the likelihood of obtaining a solution of the original problem.
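To make this concrete, here is a small numerical sketch in plain Python (independent of jijmodeling) for a toy problem: minimize \(f(x) = x_0 + 2 x_1\) over two binary variables subject to \(x_0 + x_1 = 1\); the \(\alpha\) values are chosen arbitrarily for illustration.

import itertools

def f(x):
    return x[0] + 2 * x[1]          # objective to minimize

def p(x):
    return (x[0] + x[1] - 1) ** 2   # squared violation of x_0 + x_1 = 1

for alpha in [0, 1, 5]:
    # Brute-force minimization of f + alpha * p over {0, 1}^2.
    best = min(itertools.product([0, 1], repeat=2), key=lambda x: f(x) + alpha * p(x))
    print(alpha, best, p(best))

With \(\alpha = 0\) the infeasible point \((0, 0)\) wins, while for sufficiently large \(\alpha\) the feasible point \((1, 0)\) becomes the minimizer, with \(p(x) = 0\).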

Some solvers accept only unconstrained optimization problems; the “U” in QUBO (Quadratic Unconstrained Binary Optimization) stands for “Unconstrained”. To pass a mathematical model formulated in jijmodeling as a constrained optimization problem to a solver with a QUBO input format, it must first be converted into an unconstrained problem using jijmodeling_transpiler or JijZept. In practical optimization problems the definition of \(p\) also matters, so jijmodeling provides a feature for customizing the penalty term.

Converting Constraints to Penalty Terms#

jijmodeling itself does not convert constraints into penalty terms. Here, we explain how jijmodeling_transpiler performs this conversion. Consider a simple problem like the following.

\[\begin{split} \begin{aligned} \text{Minimize} & \quad \sum_{i=0}^{N-1} a_i x_i \\ \text{subject to} & \quad \sum_{i = 0}^{N - 1} x_i = 1 \end{aligned} \end{split}\]

This problem can be formulated in jijmodeling as follows.

import jijmodeling as jm

a = jm.Placeholder("a", ndim=1)
N = a.len_at(0, latex="N")

i = jm.Element('i', belong_to=(0, N))
x = jm.BinaryVar('x', shape=(N,))

problem = jm.Problem('translate_constraint_to_penalty')
problem += jm.sum(i, a[i]*x[i])
problem += jm.Constraint(
    'onehot',
    jm.sum(i, x[i]) == 1,
)
problem
\[\begin{split}\begin{array}{cccc}\text{Problem:} & \text{translate\_constraint\_to\_penalty} & & \\& & \min \quad \displaystyle \sum_{i = 0}^{N - 1} a_{i} \cdot x_{i} & \\\text{{s.t.}} & & & \\ & \text{onehot} & \displaystyle \sum_{i = 0}^{N - 1} x_{i} = 1 & \\\text{{where}} & & & \\& x & 1\text{-dim binary variable}\\\end{array}\end{split}\]

In jijmodeling_transpiler, this constrained optimization problem is converted into the following unconstrained optimization problem.

\[ \text{Minimize} \quad \sum_{i=0}^{N-1} a_i x_i + \alpha \left(\sum_{i = 0}^{N - 1} x_i - 1\right)^2 \]

Here, let’s consider the case where \(a = [1, 2]\) and \(\alpha = 5\).

import jijmodeling_transpiler as jmt

instance_data = {
    "a": [1, 2],
}

compiled_model = jmt.core.compile_model(problem, instance_data)
pubo_builder = jmt.core.pubo.transpile_to_pubo(
    compiled_model,
    normalize=False  # Disable normalization for simplicity
)
qubo, constant = pubo_builder.get_qubo_dict(multipliers={ 'onehot': 5 })

The result is as follows.

qubo
{(0, 0): -4.0, (1, 1): -3.0, (0, 1): 10.0}
constant
5.0

These values of qubo and constant follow from the calculation below.

\[\begin{split} \begin{aligned} \sum_{i=0}^{N-1} a_i x_i + \alpha \left(\sum_{i = 0}^{N - 1} x_i - 1\right)^2 &= x_0 + 2 x_1 + 5 (x_0 + x_1 - 1)^2 \\ &= -4 x_0 - 3 x_1 + 10 x_0 x_1 + 5 \\ &= \begin{bmatrix} x_0 & x_1 \end{bmatrix} \begin{bmatrix} -4 & 10 \\ 0 & -3 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} + 5 \end{aligned} \end{split}\]

The above calculation uses the fact that a binary variable \(x_i\) satisfies \(x_i^2 = x_i\).
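As a quick sanity check, the QUBO dictionary can be evaluated by brute force and compared against the penalized objective. A short verification sketch in plain Python, reusing the qubo and constant obtained above:

import itertools

for x in itertools.product([0, 1], repeat=2):
    # Energy according to the QUBO dictionary plus its constant offset.
    qubo_energy = sum(c * x[u] * x[v] for (u, v), c in qubo.items()) + constant
    # Original objective plus the penalty term with alpha = 5.
    penalized = x[0] + 2 * x[1] + 5 * (x[0] + x[1] - 1) ** 2
    assert qubo_energy == penalized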

The conversion process of jijmodeling_transpiler is divided into two phases, as sketched after this list.

  • Compile the Problem object and instance_data into a CompiledInstance object.

  • Convert the CompiledInstance object to QUBO by specifying the multipliers.
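Because the two phases are separate, the same compiled model can be reused to try different multiplier values without recompiling. A minimal sketch continuing the example above (the value 10 is arbitrary):

# Phase 2 can be repeated with a different multiplier on the same builder.
qubo_10, constant_10 = pubo_builder.get_qubo_dict(multipliers={'onehot': 10})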

Note

jijmodeling_transpiler lets you set the multiplier for each penalty term via the multipliers argument of the get_qubo_dict method. It also supports other relaxation methods, such as the augmented Lagrangian method. For details, refer to the jijmodeling_transpiler reference.

CustomPenaltyTerm#

Converting constraints into penalty terms is normally the job of jijmodeling_transpiler or the JijZept services, but there are cases where you want to define your own penalty term. In jijmodeling, you can customize the penalty term using CustomPenaltyTerm. Here, we explain how to define the same penalty term as in the previous example using CustomPenaltyTerm.

\[ \text{Minimize} \quad \sum_{i=0}^{N-1} a_i x_i + \alpha \left(\sum_{i = 0}^{N - 1} x_i - 1\right)^2 \]

This problem can be expressed using CustomPenaltyTerm as follows.

import jijmodeling as jm

a = jm.Placeholder("a", ndim=1)
N = a.len_at(0, latex="N")

i = jm.Element('i', belong_to=(0, N))
x = jm.BinaryVar('x', shape=(N,))

problem = jm.Problem('penalty_sample')
problem += jm.sum(i, a[i]*x[i])
problem += jm.CustomPenaltyTerm(
    'onehot',
    (jm.sum(i, x[i]) - 1)**2,
)

At this point, the multiplier \(\alpha\) has not yet been set, so you can use jijmodeling_transpiler just as before.

import jijmodeling_transpiler as jmt

instance_data = {
    "a": [1, 2],
}

compiled_model = jmt.core.compile_model(problem, instance_data)
pubo_builder = jmt.core.pubo.transpile_to_pubo(
    compiled_model,
    normalize=False  # Disable normalization for simplicity
)
qubo, constant = pubo_builder.get_qubo_dict(multipliers={ 'onehot': 5 })

Of course, the result is the same.

qubo
{(0, 0): -4.0, (1, 1): -3.0, (0, 1): 10.0}
constant
5.0

CustomPenaltyTerm also takes a forall argument, just like Constraint.

import jijmodeling as jm

a = jm.Placeholder("a", ndim=2)
N = a.len_at(0, latex="N")
M = a.len_at(1, latex="M")

i = jm.Element('i', belong_to=(0, N))
j = jm.Element('j', belong_to=(0, M))
x = jm.BinaryVar('x', shape=(N, M))

problem = jm.Problem('forall_penalty_sample')
problem += jm.sum([i, j], a[i, j]*x[i, j])
problem += jm.CustomPenaltyTerm(
    'onehot',
    (jm.sum(j, x[i, j]) - 1)**2,
    forall=i
)
problem
\[\begin{split}\begin{array}{cccc}\text{Problem:} & \text{forall\_penalty\_sample} & & \\& & \min \quad \displaystyle \sum_{i = 0}^{N - 1} \sum_{j = 0}^{M - 1} a_{i, j} \cdot x_{i, j} & \\\text{{penalty terms}} & & & \\ & \text{onehot} & \displaystyle \left(\left(\sum_{j = 0}^{M - 1} x_{i, j} - 1\right)^{2}\right) & \forall i \in \left\{0,\ldots,N - 1\right\} \\\text{{where}} & & & \\& x & 2\text{-dim binary variable}\\\end{array}\end{split}\]