jijzepttools.blackbox_optimization.benchmark.visualization_minto#

Visualization utilities for blackbox optimization benchmark results using minto.

This module provides functions to load and visualize experimental results stored in minto format from benchmark runs.

The visualization is organized by problem, with each problem generating separate plots for different solvers. Solvers are identified by their 'solver_label' parameter, and problems by their 'problem_label' parameter.

Attributes#

SOLVER_COLOR_MAP

Functions#

find_minto_directories(→ List[str])

Search recursively for directories that contain both 'experiment' and 'runs'.

load_all_experiments(→ Dict[str, List[minto.Experiment]])

Load experiments and keep them grouped by their directory name.

extract_all_data(→ Dict[str, List[dict]])

Extract execution time, y history, and best-y history in a single pass.

separate_data_by_problem(→ Dict[str, Dict[str, ...)

Group extracted data by problem_label.

build_solver_color_map(→ Dict[str, Tuple[float, float, ...)

Generate a solver → colour mapping using the Matplotlib default colour cycle order.

plot_execution_times_optimized(data[, save_path, figsize])

Scatter plot of execution times per solver.

plot_y_history_optimized(data[, save_path, figsize])

Non‑aggregated y history: each run plotted separately, colour fixed by solver.

plot_best_y_history_optimized(data[, save_path, figsize])

Non‑aggregated best‑y history: each run plotted separately, colour fixed by solver.

plot_y_history_aggregated(data[, save_path, figsize])

Aggregated y history (mean ±1σ) with fixed solver colours.

plot_best_y_history_aggregated(data[, save_path, figsize])

Aggregated best‑y history (mean ±1σ) with fixed solver colours.

visualize_benchmark_results(base_directory[, ...])

Convenience wrapper that generates all plots.

main()

CLI entry point.

Module Contents#

SOLVER_COLOR_MAP: Dict[str, Tuple[float, float, float, float]]#
find_minto_directories(base_directory: str) List[str]#

Search recursively for directories that contain both 'experiment' and 'runs'.
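A minimal stdlib sketch of this search, assuming 'experiment' and 'runs' may appear as either files or subdirectories (the packaged implementation may differ in details such as ordering):

```python
import os
from typing import List


def find_minto_dirs(base_directory: str) -> List[str]:
    """Illustrative sketch: walk the tree and keep every directory
    that contains both an 'experiment' and a 'runs' entry."""
    matches: List[str] = []
    for root, dirs, files in os.walk(base_directory):
        entries = set(dirs) | set(files)
        if {"experiment", "runs"} <= entries:
            matches.append(root)
    return sorted(matches)
```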

load_all_experiments(base_directory: str) Dict[str, List[minto.Experiment]]#

Load experiments and keep them grouped by their directory name.

extract_all_data(experiments_by_dir: Dict[str, List[minto.Experiment]]) Dict[str, List[dict]]#

Extract execution time, y history, and best-y history in a single pass.
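Assuming a minimization problem, the best-y history is the running minimum of the y history; a minimal sketch of that relationship (the helper name is illustrative, not part of the module):

```python
from typing import List


def best_y_history(y_history: List[float]) -> List[float]:
    """Running best-so-far (minimum) of the objective values,
    assuming a minimization problem."""
    best: List[float] = []
    current = float("inf")
    for y in y_history:
        current = min(current, y)
        best.append(current)
    return best
```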

separate_data_by_problem(data: Dict[str, List[dict]]) Dict[str, Dict[str, List[dict]]]#

Group extracted data by problem_label.
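A sketch of the regrouping, assuming each extracted record carries a 'problem_label' key and the inner mapping stays keyed by directory (both are assumptions about the actual data layout; the function name is illustrative):

```python
from collections import defaultdict
from typing import Dict, List


def separate_by_problem(
    data: Dict[str, List[dict]],
) -> Dict[str, Dict[str, List[dict]]]:
    """Regroup {directory: [record, ...]} into
    {problem_label: {directory: [record, ...]}}."""
    grouped: Dict[str, Dict[str, List[dict]]] = defaultdict(
        lambda: defaultdict(list)
    )
    for directory, records in data.items():
        for record in records:
            label = record.get("problem_label", "unknown")
            grouped[label][directory].append(record)
    # Convert nested defaultdicts back to plain dicts.
    return {p: dict(d) for p, d in grouped.items()}
```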

build_solver_color_map(all_solvers: List[str]) Dict[str, Tuple[float, float, float, float]]#

Generate a solver → colour mapping using the Matplotlib default colour cycle order.
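To keep the sketch dependency-free, the hex codes of Matplotlib's default ("tab10") colour cycle are hard-coded below; the actual implementation presumably reads the cycle from Matplotlib itself. Function and helper names are illustrative:

```python
from typing import Dict, List, Tuple

# Matplotlib's default colour cycle ("tab10") as hex strings.
_DEFAULT_CYCLE = [
    "#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd",
    "#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf",
]


def _hex_to_rgba(hex_color: str, alpha: float = 1.0) -> Tuple[float, float, float, float]:
    """Convert '#rrggbb' to an (r, g, b, a) tuple of floats in [0, 1]."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255.0 for i in (1, 3, 5))
    return (r, g, b, alpha)


def build_color_map(
    all_solvers: List[str],
) -> Dict[str, Tuple[float, float, float, float]]:
    """Assign each solver a colour in cycle order, wrapping if there
    are more solvers than colours."""
    return {
        solver: _hex_to_rgba(_DEFAULT_CYCLE[i % len(_DEFAULT_CYCLE)])
        for i, solver in enumerate(all_solvers)
    }
```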

plot_execution_times_optimized(data: Dict[str, List[dict]], save_path: str | None = None, figsize: Tuple[int, int] = (8, 8))#

Scatter plot of execution times per solver.

plot_y_history_optimized(data: Dict[str, List[dict]], save_path: str | None = None, figsize: Tuple[int, int] = (12, 5))#

Non‑aggregated y history: each run plotted separately, colour fixed by solver.

plot_best_y_history_optimized(data: Dict[str, List[dict]], save_path: str | None = None, figsize: Tuple[int, int] = (12, 5))#

Non‑aggregated best‑y history: each run plotted separately, colour fixed by solver.

plot_y_history_aggregated(data: Dict[str, List[dict]], save_path: str | None = None, figsize: Tuple[int, int] = (12, 5))#

Aggregated y history (mean ±1σ) with fixed solver colours.

plot_best_y_history_aggregated(data: Dict[str, List[dict]], save_path: str | None = None, figsize: Tuple[int, int] = (12, 5))#

Aggregated best‑y history (mean ±1σ) with fixed solver colours.
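The mean ±1σ aggregation over runs can be sketched with the standard library. Truncating all runs to the shortest history is an assumption made here for simplicity; padding is another valid choice, and the module may handle unequal lengths differently:

```python
from statistics import mean, pstdev
from typing import List, Tuple


def aggregate_histories(
    histories: List[List[float]],
) -> Tuple[List[float], List[float]]:
    """Per-iteration mean and population standard deviation across
    runs, truncated to the shortest history."""
    n = min(len(h) for h in histories)
    means: List[float] = []
    stds: List[float] = []
    for t in range(n):
        column = [h[t] for h in histories]
        means.append(mean(column))
        stds.append(pstdev(column))
    return means, stds
```

The mean curve is then plotted in the solver's fixed colour, with a band of ±1σ around it.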

visualize_benchmark_results(base_directory: str, output_directory: str | None = None, show_plots: bool = False)#

Convenience wrapper that generates all plots.

main()#

CLI entry point.