jijzepttools.blackbox_optimization.factorization_machine#

Classes#

FactorizationMachine

Factorization Machine model

FMTrainer

FM Trainer using the Adam optimizer with mini-batch updates.

FMTrainerLBFGS

FM Trainer using the L-BFGS optimizer with full-batch optimization.

Module Contents#

class FactorizationMachine(n_features: int, latent_dim: int)#

Bases: torch.nn.Module

Factorization Machine model

$$
f(x \mid w, v) = w_0 + \sum_i w_i x_i + \sum_{j<k} \langle v_j, v_k \rangle\, x_j x_k
= w_0 + \sum_i w_i x_i + \frac{1}{2}\left(\Big\|\sum_i v_i x_i\Big\|^2 - \sum_i \|v_i\|^2 x_i^2\right)
$$

n_features#
latent_dim#
linear#
quad#
forward(x)#
property v#
property w#
property w0#
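
The second form of the model equation is what makes the pairwise interaction term computable in O(n·k) rather than O(n²). As a rough illustration only, the sketch below re-derives that forward pass in PyTorch; it is not the library's implementation, and the parameter names `w0`, `w`, `v` simply mirror the properties listed above.

```python
import torch


class FMSketch(torch.nn.Module):
    """Illustrative FM forward pass using the squared-sum trick (not the library's code)."""

    def __init__(self, n_features: int, latent_dim: int):
        super().__init__()
        self.w0 = torch.nn.Parameter(torch.zeros(1))                              # global bias
        self.w = torch.nn.Parameter(torch.zeros(n_features))                      # linear weights
        self.v = torch.nn.Parameter(0.01 * torch.randn(n_features, latent_dim))   # latent factors

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features)
        linear = x @ self.w                                  # Σ_i w_i x_i
        sum_sq = (x @ self.v).pow(2).sum(dim=1)              # ||Σ_i v_i x_i||^2
        sq_sum = (x.pow(2) @ self.v.pow(2)).sum(dim=1)       # Σ_i ||v_i||^2 x_i^2
        quad = 0.5 * (sum_sq - sq_sum)                       # Σ_{j<k} <v_j, v_k> x_j x_k
        return self.w0 + linear + quad
```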
class FMTrainer(n_features: int, latent_dim: int, config: jijzepttools.blackbox_optimization.fm_config.AdamConfig)#

FM Trainer using the Adam optimizer with mini-batch updates.

n_features#
latent_dim#
config#
setup_initial_state()#
fit(x_numpy: numpy.ndarray, y_numpy: numpy.ndarray, verbose: bool = True)#

Fit the Factorization Machine as a surrogate model.

Since the goal is building a surrogate model, early stopping is based on the training loss; if the fitted model is used for prediction instead, it may overfit.

Parameters:
  • x_numpy (np.ndarray) – input features, shape (n_samples, n_features)

  • y_numpy (np.ndarray) – target values, shape (n_samples,)

  • verbose (bool) – whether to show training progress bar, default True

predict(x: numpy.ndarray) → numpy.ndarray#
property x#
property y#
get_qubo() → tuple[numpy.ndarray, float]#
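
A hedged usage sketch for FMTrainer: the call pattern follows the signatures above, but constructing AdamConfig() with no arguments, and reading the float returned by get_qubo() as a constant offset, are assumptions, not documented behavior.

```python
import numpy as np
from jijzepttools.blackbox_optimization.factorization_machine import FMTrainer
from jijzepttools.blackbox_optimization.fm_config import AdamConfig

# Toy binary inputs and noisy linear targets for the surrogate fit.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(64, 10)).astype(float)
y = x @ rng.normal(size=10) + 0.1 * rng.normal(size=64)

trainer = FMTrainer(n_features=10, latent_dim=4, config=AdamConfig())  # default config is an assumption
trainer.fit(x, y, verbose=False)

y_hat = trainer.predict(x)       # surrogate predictions, shape (n_samples,)
Q, const = trainer.get_qubo()    # QUBO matrix plus a float (interpreted here as the constant term)
```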
class FMTrainerLBFGS(n_features: int, latent_dim: int, config: jijzepttools.blackbox_optimization.fm_config.LBFGSConfig)#

FM Trainer using the L-BFGS optimizer with full-batch optimization.

L-BFGS is a quasi-Newton method that typically converges faster than first-order methods like Adam for small to medium-sized datasets.

n_features#
latent_dim#
config#
setup_initial_state()#
fit(x_numpy: numpy.ndarray, y_numpy: numpy.ndarray, verbose: bool = True)#

Fit the Factorization Machine using the L-BFGS optimizer.

L-BFGS performs full-batch optimization and typically converges in fewer iterations than mini-batch methods.

Parameters:
  • x_numpy (np.ndarray) – input features, shape (n_samples, n_features)

  • y_numpy (np.ndarray) – target values, shape (n_samples,)

  • verbose (bool) – whether to show optimization info (currently unused for L-BFGS)

predict(x: numpy.ndarray) → numpy.ndarray#
property x#
property y#
get_qubo() → tuple[numpy.ndarray, float]#
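
The L-BFGS trainer exposes the same interface, differing only in the optimizer and its config type. A minimal sketch, assuming LBFGSConfig() can likewise be constructed with defaults:

```python
import numpy as np
from jijzepttools.blackbox_optimization.factorization_machine import FMTrainerLBFGS
from jijzepttools.blackbox_optimization.fm_config import LBFGSConfig

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=(32, 8)).astype(float)
y = x.sum(axis=1) + 0.05 * rng.normal(size=32)

# Full-batch L-BFGS fit; the no-argument LBFGSConfig() is an assumption about its defaults.
trainer = FMTrainerLBFGS(n_features=8, latent_dim=3, config=LBFGSConfig())
trainer.fit(x, y)
Q, const = trainer.get_qubo()    # same tuple[np.ndarray, float] return as FMTrainer
```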