TunerSuccessiveHalving class that implements the successive halving algorithm (SHA). SHA randomly samples n candidate hyperparameter configurations and allocates a minimum budget (r_min) to all candidates. The candidates are raced down in stages to a single best candidate: in each stage, the budget is increased by a factor of eta and only the best 1 / eta fraction of candidates is promoted to the next stage. Promising hyperparameter configurations therefore receive a larger overall budget, while poorly performing ones are discarded early.

The budget hyperparameter must be tagged with "budget" in the search space. The minimum budget (r_min), which is allocated in the base stage, is set by the lower bound of the budget parameter. The upper bound defines the maximum budget (r_max), which is allocated to the candidates in the last stage. The number of stages is computed so that each candidate in the base stage is allocated the minimum budget and the candidates in the last stage are not evaluated on more than the maximum budget. The following table is the stage layout for eta = 2, r_min = 1 and r_max = 8.

| i | n_i | r_i |
|---|-----|-----|
| 0 | 8   | 1   |
| 1 | 4   | 2   |
| 2 | 2   | 4   |
| 3 | 1   | 8   |

i is the stage number, n_i is the number of configurations and r_i is the budget allocated to a single configuration.
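The stage layout above can be reproduced with a few lines of R. This is a sketch of the assumed schedule formulas, not the package internals; `sha_schedule` is a hypothetical helper:

```r
# Hypothetical helper illustrating the SHA stage schedule.
# Assumed formulas: n_i = floor(n / eta^i), r_i = r_min * eta^i.
sha_schedule = function(n, eta, r_min, r_max) {
  # small epsilon guards against floating point error in log()
  s_max = floor(log(r_max / r_min, base = eta) + 1e-8)
  i = 0:s_max
  data.frame(
    i = i,
    n_i = floor(n / eta^i),  # configurations evaluated in stage i
    r_i = r_min * eta^i      # budget per configuration in stage i
  )
}

sha_schedule(n = 8, eta = 2, r_min = 1, r_max = 8)
```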

## Source

Jamieson K, Talwalkar A (2016). “Non-stochastic Best Arm Identification and Hyperparameter Optimization.” In Gretton A, Robert CC (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, 240-248. http://proceedings.mlr.press/v51/jamieson16.html.

## Subsample Budget

If the learner lacks a natural budget parameter, mlr3pipelines::PipeOpSubsample can be applied to use the subsampling rate as the budget parameter. The resulting mlr3pipelines::GraphLearner is fitted on small proportions of the mlr3::Task in the first stage and on the complete task in the last stage.
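A minimal sketch of this setup (the learner, bounds, and cost parameter are illustrative choices, not prescribed by the package):

```r
library(mlr3)
library(mlr3pipelines)
library(paradox)

# wrap the learner so that the subsampling fraction becomes tunable
graph_learner = as_learner(po("subsample") %>>% lrn("classif.rpart"))

# tag the subsampling fraction as the budget parameter
search_space = ps(
  subsample.frac = p_dbl(lower = 0.1, upper = 1, tags = "budget"),
  classif.rpart.cp = p_dbl(lower = 0.001, upper = 0.1)
)
```

In the base stage, candidates are then trained on 10% of the task; promoted candidates are refitted on increasingly large fractions up to the full task.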

## Parameters

n

integer(1)
Number of candidates in base stage.

eta

numeric(1)
With every stage, the budget is increased by a factor of eta and only the best 1 / eta candidates are promoted to the next stage. Non-integer values are supported, but eta must be greater than 1.

sampler

Object defining how the samples of the parameter space should be drawn. The default is uniform sampling.

repeats

logical(1)
If FALSE (default), SHA terminates once all stages are evaluated. Otherwise, SHA starts over again once the last stage is evaluated.

adjust_minimum_budget

logical(1)
If TRUE, the minimum budget is increased so that the last stage uses the maximum budget defined in the search space.
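The parameters above are passed when constructing the tuner, e.g. via tnr() (a sketch; the values are illustrative):

```r
library(mlr3tuning)
library(mlr3hyperband)

tuner = tnr("successive_halving",
  n = 16,          # candidates sampled for the base stage
  eta = 2,         # budget growth and promotion factor
  repeats = FALSE  # terminate once all stages are evaluated
)
```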

## Archive

The mlr3tuning::ArchiveTuning holds the following additional columns that are specific to the successive halving algorithm:

• stage (integer(1))
Stage index. Starts counting at 0.

• repetition (integer(1))
Repetition index. Starts counting at 1.

## Custom Sampler

The tuner supports custom paradox::Sampler objects for drawing the initial configurations in the base stage. A custom sampler may look like this (the full example is given in the examples section):

# - beta distribution with alpha = 2 and beta = 5
# - categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))

## Parallelization

The hyperparameter configurations of one stage are evaluated in parallel with the future package. To select a parallel backend, use future::plan().

## Logging

The tuner uses a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.

## Super classes

mlr3tuning::Tuner -> mlr3tuning::TunerFromOptimizer -> TunerSuccessiveHalving

## Methods

### Public methods

Inherited methods

### Method new()

Creates a new instance of this R6 class.

### Method clone()

The objects of this class are cloneable with this method.

#### Arguments

deep

Whether to make a deep clone.

## Examples

if (requireNamespace("xgboost")) {
  library(mlr3learners)

  # define hyperparameter and budget parameter
  search_space = ps(
    nrounds = p_int(lower = 1, upper = 16, tags = "budget"),
    eta = p_dbl(lower = 0, upper = 1),
    booster = p_fct(levels = c("gbtree", "gblinear", "dart"))
  )

  # \donttest{
  # hyperparameter tuning on the pima indians diabetes data set
  instance = tune(
    method = "successive_halving",
    task = tsk("pima"),
    learner = lrn("classif.xgboost", eval_metric = "logloss"),
    resampling = rsmp("cv", folds = 3),
    measures = msr("classif.ce"),
    search_space = search_space,
    term_evals = 100
  )

  # best performing hyperparameter configuration
  instance$result
  # }
}
#>    nrounds       eta booster learner_param_vals  x_domain classif.ce
#> 1:       8 0.3942863  gbtree          <list[6]> <list[3]>  0.2317708