jijzepttools.blackbox_optimization.sparse_bayesian_linear_reg#

Classes#

Trace

SparseBayesianLinearRegression

Sparse Bayesian Linear Regression using Horseshoe prior distribution.

Functions#

sample_from_inv_gamma(shape, scale)

sample_theta(x, y, lamb2, tau2, sigma2)

sample_sigma2(x, y, theta, lamb2, tau2)

sample_lamb2(theta, sigma2, tau2, nu)

sample_tau2(theta, sigma2, lamb2, xi)

sample_nu(lamb2)

sample_xi(tau2)

horseshoe_gibbs_sampling(x, y, theta, sigma2, lamb2, ...)

get_nonzero_column_indices(x[, logger])

Module Contents#

class Trace#
coef: numpy.ndarray#
bias: numpy.ndarray#
lambda2: numpy.ndarray#
sigma2: float#
tau2: float#
nu: numpy.ndarray#
xi: float#
property num_samples#
class SparseBayesianLinearRegression(random_seed: int | None = None, warm_start: bool = False, keep_one_trace: bool = True)#

Sparse Bayesian Linear Regression using Horseshoe prior distribution.

This class implements Bayesian linear regression with automatic sparsity induction through the Horseshoe prior. The model assumes:

y_i ~ Normal(β_0 + β^T x_i, sigma^2)

where β_0 is the bias term and β are the regression coefficients. The coefficients are combined as theta = (β_0, β) and sampled together via the sample_theta method.

The Horseshoe prior automatically determines which coefficients should be shrunk towards zero (sparse) and which should remain active, without requiring manual hyperparameter tuning.
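The half-Cauchy local scales that define the horseshoe prior admit an inverse-gamma scale-mixture representation, which is what makes the auxiliary variables nu and xi (and the `sample_from_inv_gamma` helper) useful. A minimal NumPy sketch of that representation; the helper name `sample_inv_gamma` here is illustrative and independent of this module's `sample_from_inv_gamma`:

```python
import numpy as np

def sample_inv_gamma(rng, shape, scale, size=None):
    # InvGamma(shape, scale) via the reciprocal of a Gamma draw:
    # if g ~ Gamma(shape, rate=scale) then 1/g ~ InvGamma(shape, scale).
    return 1.0 / rng.gamma(shape, 1.0 / scale, size=size)

rng = np.random.default_rng(0)

# Half-Cauchy(0, 1) as a mixture of inverse gammas:
#   nu ~ InvGamma(1/2, 1),  lamb2 | nu ~ InvGamma(1/2, 1/nu)
# then lamb = sqrt(lamb2) ~ Half-Cauchy(0, 1).
n = 200_000
nu = sample_inv_gamma(rng, 0.5, 1.0, size=n)
lamb2 = sample_inv_gamma(rng, 0.5, 1.0 / nu)
lamb = np.sqrt(lamb2)

# Sanity check: the median of a standard half-Cauchy is 1,
# so np.median(lamb) should be close to 1 for large n.
```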

Gibbs Sampling Parameters:

- theta = (β_0, β): regression coefficients (feature coefficients + bias)
- sigma^2: observation noise variance
- lambda^2: local shrinkage parameters (one per coefficient)
- tau^2: global shrinkage parameter
- nu, xi: auxiliary variables for efficient sampling

All parameters are treated as random variables and sampled from their posterior distributions; no fixed hyperparameters need to be specified.
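To make the parameter list above concrete, here is a sketch of one Gibbs sweep over the shrinkage parameters using the standard auxiliary-variable conditionals for the horseshoe (in the style of Makalic and Schmidt's sampler). This is an illustration of the technique, not this module's implementation; the exact conditionals used by `horseshoe_gibbs_sampling` may differ:

```python
import numpy as np

def inv_gamma(rng, shape, scale, size=None):
    # InvGamma(shape, scale) via the reciprocal of a Gamma draw.
    return 1.0 / rng.gamma(shape, 1.0 / scale, size=size)

def update_shrinkage(rng, theta, sigma2, lamb2, tau2):
    """One Gibbs sweep over (nu, lamb2, xi, tau2), conditional on the
    coefficients theta and noise variance sigma2 (sketch only)."""
    p = theta.size
    # Auxiliary: nu_j | lamb2_j ~ InvGamma(1, 1 + 1/lamb2_j)
    nu = inv_gamma(rng, 1.0, 1.0 + 1.0 / lamb2)
    # Local scales: lamb2_j | . ~ InvGamma(1, 1/nu_j + theta_j^2 / (2 tau2 sigma2))
    lamb2 = inv_gamma(rng, 1.0, 1.0 / nu + theta**2 / (2.0 * tau2 * sigma2))
    # Auxiliary: xi | tau2 ~ InvGamma(1, 1 + 1/tau2)
    xi = inv_gamma(rng, 1.0, 1.0 + 1.0 / tau2)
    # Global scale: tau2 | . ~ InvGamma((p+1)/2, 1/xi + sum(theta^2/lamb2) / (2 sigma2))
    tau2 = inv_gamma(rng, (p + 1) / 2.0,
                     1.0 / xi + np.sum(theta**2 / lamb2) / (2.0 * sigma2))
    return nu, lamb2, xi, tau2

rng = np.random.default_rng(1)
theta = np.array([2.0, 0.0, -1.5])
nu, lamb2, xi, tau2 = update_shrinkage(rng, theta, sigma2=1.0,
                                       lamb2=np.ones(3), tau2=1.0)
```

Coefficients near zero (like theta[1] here) tend to draw small lamb2 values and get shrunk further, while clearly nonzero coefficients keep large local scales — this is the mechanism behind the automatic sparsity described above.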

Each fit() call appends a Trace to the internal trace list. The model parameters learned by the most recent fit() are stored in trace[-1].coef and trace[-1].bias:

- coef: shape (draws, n_features), the regression coefficients β for the input features
- bias: shape (draws,), the bias term β_0

Each array holds draws posterior samples, and the coefficient order follows the feature order of the input x.

rs#
trace: list[Trace] = []#
warm_start = False#
keep_one_trace = True#
fit(x: numpy.ndarray, y: numpy.ndarray, draws: int = 10, tune: int = 100)#
predict(x: numpy.ndarray) → numpy.ndarray#

Predict using the most recent trace of coefficients and bias.

Parameters:

x (np.ndarray, shape (n_samples, n_features)) – Input features for prediction.

Returns:

Predicted values.

Return type:

np.ndarray, shape (n_samples,)
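Given the trace shapes documented above, prediction from posterior samples reduces to an average over draws. A self-contained sketch, where `coef` and `bias` are random stand-ins for trace[-1].coef and trace[-1].bias; that predict() averages the draws in exactly this way is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features, draws = 4, 3, 10

x = rng.normal(size=(n_samples, n_features))
coef = rng.normal(size=(draws, n_features))  # stand-in for trace[-1].coef
bias = rng.normal(size=draws)                # stand-in for trace[-1].bias

# Per-draw predictions, shape (draws, n_samples) ...
per_draw = coef @ x.T + bias[:, None]
# ... averaged over posterior draws -> shape (n_samples,)
y_pred = per_draw.mean(axis=0)

# Equivalent: predict once with the posterior-mean coefficients.
y_mean_coef = x @ coef.mean(axis=0) + bias.mean()
assert np.allclose(y_pred, y_mean_coef)
```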

sample_from_inv_gamma(shape, scale)#
sample_theta(x, y, lamb2, tau2, sigma2)#
sample_sigma2(x, y, theta, lamb2, tau2)#
sample_lamb2(theta, sigma2, tau2, nu)#
sample_tau2(theta, sigma2, lamb2, xi)#
sample_nu(lamb2)#
sample_xi(tau2)#
horseshoe_gibbs_sampling(x, y, theta, sigma2, lamb2, tau2, nu, xi, max_iter)#
get_nonzero_column_indices(x, logger=None)#
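Judging by its name, get_nonzero_column_indices plausibly returns the indices of columns of x containing at least one nonzero entry (e.g. to drop constant-zero features before fitting). The NumPy one-liner below shows that operation; this is an assumption about the helper's behaviour, not its actual implementation:

```python
import numpy as np

x = np.array([[0.0, 1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0, 0.0]])

# Indices of columns with at least one nonzero entry.
nonzero_cols = np.flatnonzero(np.any(x != 0, axis=0))
print(nonzero_cols)  # -> [1 3]
```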