jijzepttools.blackbox_optimization.factorization_machine#

Classes#

FactorizationMachine

Factorization Machine model

FMTrainer

FM Trainer using the Adam optimizer with mini-batch SGD.

FMTrainerLBFGS

FM Trainer using the L-BFGS optimizer with full-batch optimization.

Module Contents#

class FactorizationMachine(n_features: int, latent_dim: int, non_binary_indices: list[int] | None = None)#

Bases: torch.nn.Module

Factorization Machine model

```math
f(x \mid w, v) = w_0 + \sum_{i} w_i x_i + \sum_{i < j} \langle v_i, v_j \rangle x_i x_j + \sum_{i \in \mathcal{N}} d_i x_i^2
               = w_0 + \sum_{i} w_i x_i + \frac{1}{2} \sum_{\ell=1}^{k} \left[ \left( \sum_{i} v_{i\ell} x_i \right)^2 - \sum_{i} v_{i\ell}^2 x_i^2 \right] + \sum_{i \in \mathcal{N}} d_i x_i^2
```

where \(\mathcal{N}\) is the set of non-binary variable indices and \(k\) is the latent dimension; the diagonal terms \(d_i x_i^2\) apply only to non-binary variables.
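The second form is the standard \(O(nk)\) reformulation of the pairwise term, which avoids the explicit double sum. A minimal PyTorch sketch of that computation (the names `x` and `v` are illustrative, not the module's internal attribute layout):

```python
import torch

def pairwise_interaction(x: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Sum over i < j of <v_i, v_j> * x_i * x_j, computed in O(n * k).

    x: (batch, n_features), v: (n_features, latent_dim).
    """
    sum_then_square = (x @ v) ** 2          # (sum_i v_il * x_i)^2 per latent factor l
    square_then_sum = (x ** 2) @ (v ** 2)   # sum_i v_il^2 * x_i^2 per latent factor l
    return 0.5 * (sum_then_square - square_then_sum).sum(dim=1)
```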

n_features#
latent_dim#
non_binary_indices = []#
linear#
quad#
forward(x)#
property v#
property w#
property w0#
property d#

Diagonal coefficients for non-binary variables

get_full_diag_coeffs() numpy.ndarray#

Get the diagonal coefficients as a full array of shape (n_features,), with 0 for binary variables.

Returns:

Array of shape (n_features,) where non-binary variable indices contain the learned diagonal coefficients and binary variable indices are 0.

Return type:

np.ndarray
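
A minimal usage sketch of the model class. The input shape for `forward` and the use of `torch.rand` are assumptions inferred from the signatures above, not confirmed behavior:

```python
import torch
from jijzepttools.blackbox_optimization.factorization_machine import FactorizationMachine

# 8 features, two of which (indices 5 and 7) are continuous rather than binary.
model = FactorizationMachine(n_features=8, latent_dim=4, non_binary_indices=[5, 7])

x = torch.rand(16, 8)                  # assumed shape: (batch, n_features)
y_hat = model(x)                       # dispatches to forward(x)

d_full = model.get_full_diag_coeffs()  # np.ndarray of shape (8,), zeros at binary indices
```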

class FMTrainer(n_features: int, latent_dim: int, config: jijzepttools.blackbox_optimization.fm_config.AdamConfig, non_binary_indices: list[int] | None = None)#

FM Trainer using the Adam optimizer with mini-batch SGD.

n_features#
latent_dim#
config#
non_binary_indices = None#
setup_initial_state()#
fit(x_numpy: numpy.ndarray, y_numpy: numpy.ndarray, verbose: bool = True)#

Fit the Factorization Machine as a surrogate model.

Because the goal is surrogate-model construction, early stopping is based on the training loss; a model fitted this way may overfit if it is reused for general prediction.

Parameters:
  • x_numpy (np.ndarray) – input features, shape (n_samples, n_features)

  • y_numpy (np.ndarray) – target values, shape (n_samples,)

  • verbose (bool) – whether to show training progress bar, default True

predict(x: numpy.ndarray) numpy.ndarray#
property x#
property y#
get_qubo() tuple[numpy.ndarray, float]#
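
A usage sketch for the trainer. It assumes `AdamConfig()` is constructible with default values and that `get_qubo` returns a dense coefficient matrix plus a constant offset, as suggested by its `tuple[numpy.ndarray, float]` annotation:

```python
import numpy as np
from jijzepttools.blackbox_optimization.fm_config import AdamConfig
from jijzepttools.blackbox_optimization.factorization_machine import FMTrainer

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(64, 10)).astype(float)  # hypothetical binary samples
y = rng.normal(size=64)                              # hypothetical objective values

trainer = FMTrainer(n_features=10, latent_dim=4, config=AdamConfig())
trainer.fit(x, y, verbose=False)

y_pred = trainer.predict(x)        # np.ndarray, shape (64,)
qubo, offset = trainer.get_qubo()  # surrogate QUBO coefficients and constant term
```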
class FMTrainerLBFGS(n_features: int, latent_dim: int, config: jijzepttools.blackbox_optimization.fm_config.LBFGSConfig, non_binary_indices: list[int] | None = None)#

FM Trainer using the L-BFGS optimizer with full-batch optimization.

L-BFGS is a quasi-Newton method that typically converges faster than first-order methods like Adam for small to medium-sized datasets.

n_features#
latent_dim#
config#
non_binary_indices = None#
setup_initial_state()#
fit(x_numpy: numpy.ndarray, y_numpy: numpy.ndarray, verbose: bool = True)#

Fit the Factorization Machine using L-BFGS optimizer.

L-BFGS performs full-batch optimization and typically converges in fewer iterations than mini-batch methods.

Parameters:
  • x_numpy (np.ndarray) – input features, shape (n_samples, n_features)

  • y_numpy (np.ndarray) – target values, shape (n_samples,)

  • verbose (bool) – whether to show optimization info (currently unused for L-BFGS)

predict(x: numpy.ndarray) numpy.ndarray#
property x#
property y#
get_qubo() tuple[numpy.ndarray, float]#
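
`FMTrainerLBFGS` exposes the same interface as `FMTrainer`; a sketch assuming `LBFGSConfig()` likewise accepts default construction (`x` and `y` as in the `FMTrainer` example above):

```python
from jijzepttools.blackbox_optimization.fm_config import LBFGSConfig
from jijzepttools.blackbox_optimization.factorization_machine import FMTrainerLBFGS

trainer = FMTrainerLBFGS(n_features=10, latent_dim=4, config=LBFGSConfig())
trainer.fit(x, y)                  # single full-batch L-BFGS optimization
qubo, offset = trainer.get_qubo()
```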