jijzepttools.blackbox_optimization.benchmark.benchmark_minto#
Benchmark script for blackbox optimization using minto for experiment tracking.
This script runs benchmarks on various blackbox optimization problems and methods, using minto.Experiment to track and save all experimental data.
- Usage:
python benchmark_minto.py --config config.yaml
The config.yaml file should contain experiment settings. See example_config.yaml for reference.
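A hypothetical sketch of what such a config file could look like, inferred only from the functions documented below (the field names `problems`, `solvers`, `benchmark`, and the problem/solver names are illustrative assumptions, not the actual schema; consult example_config.yaml for the real layout):

```yaml
# Illustrative layout only -- field and problem/solver names are assumptions
problems:
  - name: sphere        # hypothetical problem identifier
    dim: 5
solvers:
  - name: random_search # hypothetical solver identifier
benchmark:
  n_iter: 100           # optimization iterations per run
  n_initial: 10         # initial data points per run
  seed: 42              # random seed for reproducibility
```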
Functions#

create_problem – Create a problem instance from configuration.

create_solver – Create a solver instance from configuration.

run_single_benchmark – Run a single benchmark experiment.

run_benchmark_suite – Run a comprehensive benchmark suite based on configuration.

main – Main function for command-line interface.
Module Contents#
- create_problem(problem_config: Dict[str, Any])#
Create a problem instance from configuration.
- create_solver(solver_config: Dict[str, Any])#
Create a solver instance from configuration.
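A minimal sketch of the config-to-instance pattern these two factories likely follow: look up a class by its `name` key and pass the remaining keys as keyword arguments. The registry, the `SphereProblem` class, and the config keys are assumptions for illustration, not the module's actual implementation.

```python
from typing import Any, Dict

class SphereProblem:
    """Toy stand-in for a BlackboxFunction: f(x) = sum of squares."""
    def __init__(self, dim: int = 2):
        self.dim = dim

    def __call__(self, x):
        return sum(v * v for v in x)

# Hypothetical registry mapping config names to problem classes
PROBLEM_REGISTRY = {"sphere": SphereProblem}

def create_problem_sketch(problem_config: Dict[str, Any]):
    """Pop the 'name' key, then forward the rest as constructor kwargs."""
    cfg = dict(problem_config)           # copy so the caller's dict is untouched
    name = cfg.pop("name")
    return PROBLEM_REGISTRY[name](**cfg)

problem = create_problem_sketch({"name": "sphere", "dim": 3})
```

`create_solver` would follow the same shape with a solver registry; the design keeps the YAML config declarative while new problems or solvers only need a registry entry.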
- run_single_benchmark(problem: jijzepttools.blackbox_optimization.benchmark.problem.interface.interface.BlackboxFunction, solver: jijzepttools.blackbox_optimization.benchmark.solver.interface.interface.SolverInterface, n_iter: int, n_initial: int, seed: int, problem_label: str, solver_label: str) → minto.Experiment#
Run a single benchmark experiment.
- Parameters:
problem (BlackboxFunction) – Problem instance to solve
solver (SolverInterface) – Solver instance to use
n_iter (int) – Number of optimization iterations
n_initial (int) – Number of initial data points
seed (int) – Random seed for reproducibility
problem_label (str) – Label identifying the problem in the experiment records
solver_label (str) – Label identifying the solver in the experiment records
- Returns:
Experiment object containing the optimization history
- Return type:
minto.Experiment
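To make the iteration structure concrete, here is a self-contained sketch of a single benchmark run. The minto.Experiment API is not reproduced here; a plain dict stands in for the experiment's history, and random search stands in for whatever the configured solver does. Everything except the `n_iter` / `n_initial` / `seed` semantics described above is an assumption.

```python
import random
from typing import Callable, Dict, List

def run_single_benchmark_sketch(
    objective: Callable[[List[float]], float],
    dim: int,
    n_iter: int,
    n_initial: int,
    seed: int,
) -> Dict[str, list]:
    """Evaluate n_initial bootstrap points, then n_iter solver steps,
    recording every (x, y) pair -- the role minto.Experiment plays in
    the real function."""
    rng = random.Random(seed)            # seeded RNG for reproducibility
    history: Dict[str, list] = {"x": [], "y": []}

    for _ in range(n_initial + n_iter):
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        history["x"].append(x)
        history["y"].append(objective(x))
    return history

hist = run_single_benchmark_sketch(
    lambda x: sum(v * v for v in x), dim=2, n_iter=10, n_initial=3, seed=0
)
```

Because the RNG is seeded, two runs with the same `seed` produce identical histories, which is what makes the saved experiments reproducible.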
- run_benchmark_suite(config: Dict[str, Any], config_filename: str) → List[str]#
Run a comprehensive benchmark suite based on configuration.
- Parameters:
config (dict) – Configuration dictionary loaded from YAML file
config_filename (str) – Name of the YAML config file the dictionary was loaded from
- Returns:
List of saved experiment file paths
- Return type:
list of str
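A "comprehensive suite" over a config typically means running every problem-solver combination and collecting the saved artifacts. The sketch below shows that cross-product structure; the config keys and the label format are illustrative assumptions, and the call into run_single_benchmark plus the experiment save are reduced to a comment.

```python
from itertools import product
from typing import Any, Dict, List

def run_benchmark_suite_sketch(config: Dict[str, Any]) -> List[str]:
    """Iterate every (problem, solver) pair in the config and return
    the labels of the experiments that would be saved."""
    saved: List[str] = []
    for p, s in product(config["problems"], config["solvers"]):
        label = f"{p['name']}__{s['name']}"
        # The real function would build the instances via create_problem /
        # create_solver, call run_single_benchmark, save the resulting
        # minto.Experiment, and append the saved file path here.
        saved.append(label)
    return saved

paths = run_benchmark_suite_sketch(
    {"problems": [{"name": "sphere"}, {"name": "rastrigin"}],
     "solvers": [{"name": "random_search"}]}
)
```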
- main()#
Main function for command-line interface.
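The Usage section above implies a single `--config` option; a minimal argparse sketch of that interface (the real main() may accept additional flags not documented here):

```python
import argparse

def parse_cli(argv=None):
    """Parse the command line shown in Usage:
    python benchmark_minto.py --config config.yaml"""
    parser = argparse.ArgumentParser(
        description="Blackbox optimization benchmark using minto"
    )
    parser.add_argument("--config", required=True,
                        help="Path to the YAML experiment configuration")
    return parser.parse_args(argv)

args = parse_cli(["--config", "config.yaml"])
```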