Search Space API¶
Overview¶
Tune has a native interface for specifying search spaces. You can specify the search space via tune.run(config=...).
Within the config, you can either use the tune.grid_search primitive to specify an axis of a grid search…
tune.run(
    trainable,
    config={"bar": tune.grid_search([True, False])})
… or one of the random sampling primitives to specify distributions (Random Distributions API):
tune.run(
    trainable,
    config={
        "param1": tune.choice([True, False]),
        "bar": tune.uniform(0, 10),
        "alpha": tune.sample_from(lambda _: np.random.uniform(100) ** 2),
        "const": "hello"  # It is also ok to specify constant values.
    })
Caution
If you use a Search Algorithm, you may not be able to specify lambdas or grid search with this interface, as not all search algorithms are compatible with them.
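For example, a space built only from the distribution primitives (no lambdas, no grid search) stays compatible with most search algorithms. This is a hedged sketch; searcher stands in for whichever search algorithm instance you construct:
tune.run(
    trainable,
    config={
        "param1": tune.choice([True, False]),
        "bar": tune.uniform(0, 10),
    },
    search_alg=searcher)  # e.g. a HyperOptSearch instance (assumed constructed above)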
To sample multiple times/run multiple trials, specify tune.run(num_samples=N, ...). If grid_search is provided in the config, the same grid will be repeated N times.
# 13 different configs.
tune.run(trainable, num_samples=13, config={
    "x": tune.choice([0, 1, 2]),
})
# 13 different configs.
tune.run(trainable, num_samples=13, config={
    "x": tune.choice([0, 1, 2]),
    "y": tune.randn(0, 1),
})
# 4 different configs.
tune.run(trainable, config={"x": tune.grid_search([1, 2, 3, 4])}, num_samples=1)
# 3 different configs.
tune.run(trainable, config={"x": grid_search([1, 2, 3])}, num_samples=1)
# 6 different configs.
tune.run(trainable, config={"x": tune.grid_search([1, 2, 3])}, num_samples=2)
# 9 different configs.
tune.run(trainable, num_samples=1, config={
    "x": tune.grid_search([1, 2, 3]),
    "y": tune.grid_search(["a", "b", "c"]),
})
# 18 different configs.
tune.run(trainable, num_samples=2, config={
    "x": tune.grid_search([1, 2, 3]),
    "y": tune.grid_search(["a", "b", "c"]),
})
# 45 different configs.
tune.run(trainable, num_samples=5, config={
    "x": tune.grid_search([1, 2, 3]),
    "y": tune.grid_search(["a", "b", "c"]),
})
Note that grid search and random search primitives are interoperable: each can be used independently or in combination with the other.
# 6 different configs.
tune.run(trainable, num_samples=2, config={
    "x": tune.sample_from(...),
    "y": tune.grid_search(["a", "b", "c"]),
})
In the example below, num_samples=10 repeats the 3x3 grid search 10 times, for a total of 90 trials, each with randomly sampled values of alpha and beta.
tune.run(
    my_trainable,
    name="my_trainable",
    # num_samples will repeat the entire config 10 times.
    num_samples=10,
    config={
        # ``sample_from`` creates a generator to call the lambda once per trial.
        "alpha": tune.sample_from(lambda spec: np.random.uniform(100)),
        # ``sample_from`` also supports "conditional search spaces".
        "beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal()),
        "nn_layers": [
            # tune.grid_search will make it so that all values are evaluated.
            tune.grid_search([16, 64, 256]),
            tune.grid_search([16, 64, 256]),
        ],
    },
)
Custom/Conditional Search Spaces¶
You’ll often run into awkward search spaces (for example, when one hyperparameter depends on another). Use tune.sample_from(func) to provide a custom callable function for generating a search space.
The parameter func should take in a spec object, which has a config namespace from which you can access other hyperparameters. This is useful for conditional distributions:
tune.run(
    ...,
    config={
        # A random function
        "alpha": tune.sample_from(lambda _: np.random.uniform(100)),
        # Use the `spec.config` namespace to access other hyperparameters
        "beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal())
    }
)
Here’s an example showing a grid search over two nested parameters combined with random sampling from two lambda functions, generating 9 different trials. Note that the value of beta depends on the value of alpha, which is represented by referencing spec.config.alpha in the lambda function. This lets you specify conditional parameter distributions.
tune.run(
    my_trainable,
    name="my_trainable",
    config={
        "alpha": tune.sample_from(lambda spec: np.random.uniform(100)),
        "beta": tune.sample_from(lambda spec: spec.config.alpha * np.random.normal()),
        "nn_layers": [
            tune.grid_search([16, 64, 256]),
            tune.grid_search([16, 64, 256]),
        ],
    }
)
Random Distributions API¶
This section covers the functions you can use to define your search spaces.
For a high-level overview, see this example:
config = {
    # Sample a float uniformly between -5.0 and -1.0
    "uniform": tune.uniform(-5, -1),
    # Sample a float uniformly between 3.2 and 5.4,
    # rounding to increments of 0.2
    "quniform": tune.quniform(3.2, 5.4, 0.2),
    # Sample a float uniformly between 0.0001 and 0.01, while
    # sampling in log space
    "loguniform": tune.loguniform(1e-4, 1e-2),
    # Sample a float uniformly between 0.0001 and 0.1, while
    # sampling in log space and rounding to increments of 0.0005
    "qloguniform": tune.qloguniform(1e-4, 1e-1, 5e-4),
    # Sample a random float from a normal distribution with
    # mean=10 and sd=2
    "randn": tune.randn(10, 2),
    # Sample a random float from a normal distribution with
    # mean=10 and sd=2, rounding to increments of 0.2
    "qrandn": tune.qrandn(10, 2, 0.2),
    # Sample an integer uniformly between -9 (inclusive) and 15 (exclusive)
    "randint": tune.randint(-9, 15),
    # Sample a random integer uniformly between -21 (inclusive) and 12 (inclusive (!)),
    # rounding to increments of 3 (includes 12)
    "qrandint": tune.qrandint(-21, 12, 3),
    # Sample an option uniformly from the specified choices
    "choice": tune.choice(["a", "b", "c"]),
    # Sample from a random function, in this case one that
    # depends on another value from the search space
    "func": tune.sample_from(lambda spec: spec.config.uniform * 0.01),
    # Do a grid search over these values. Every value will be sampled
    # `num_samples` times (`num_samples` is the parameter you pass to `tune.run()`)
    "grid": tune.grid_search([32, 64, 128])
}
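To tie the overview together, here is a hedged usage sketch (trainable is assumed to be any Tune trainable defined elsewhere). The 3-point grid is repeated num_samples times, so num_samples=2 yields 3 x 2 = 6 trials, each with fresh draws from the random distributions:
analysis = tune.run(
    trainable,        # any function or class trainable (assumed defined)
    config=config,    # the search space dict from the example above
    num_samples=2)    # repeats the 3-point grid twice: 6 trials total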
tune.uniform¶
ray.tune.uniform(lower: float, upper: float)[source]¶
Sample a float value uniformly between lower and upper.
Sampling from tune.uniform(1, 10) is equivalent to sampling from np.random.uniform(1, 10).
tune.quniform¶
ray.tune.quniform(lower: float, upper: float, q: float)[source]¶
Sample a quantized float value uniformly between lower and upper.
The value will be quantized, i.e. rounded to an integer increment of q. Quantization makes the upper bound inclusive.
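For intuition, here is a minimal sketch of the quantization behavior (quantized_uniform is a hypothetical helper, not part of the Ray API):
import numpy as np

def quantized_uniform(lower, upper, q):
    # Draw uniformly, then snap to the nearest integer multiple of q.
    # Rounding (rather than truncating) is what makes the upper bound reachable.
    return np.round(np.random.uniform(lower, upper) / q) * q

# Values land on 3.2, 3.4, ..., 5.4 (up to floating-point noise).
print(quantized_uniform(3.2, 5.4, 0.2))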
tune.loguniform¶
ray.tune.loguniform(lower: float, upper: float, base: float = 10)[source]¶
Sugar for sampling in different orders of magnitude.
- Parameters
lower (float) – Lower boundary of the output interval (e.g. 1e-4)
upper (float) – Upper boundary of the output interval (e.g. 1e-2)
base (float) – Base of the log. Defaults to 10.
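Roughly speaking, log-uniform sampling draws the exponent uniformly, so each order of magnitude is equally likely. A minimal sketch (log_uniform is a hypothetical helper, not part of the Ray API):
import numpy as np

def log_uniform(lower, upper, base=10):
    # Sample the exponent uniformly between log_base(lower) and log_base(upper).
    log_lower = np.log(lower) / np.log(base)
    log_upper = np.log(upper) / np.log(base)
    return base ** np.random.uniform(log_lower, log_upper)

# About a third of samples fall in each decade:
# [1e-4, 1e-3), [1e-3, 1e-2), [1e-2, 1e-1].
print(log_uniform(1e-4, 1e-1))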
tune.qloguniform¶
ray.tune.qloguniform(lower: float, upper: float, q: float, base: float = 10)[source]¶
Sugar for sampling in different orders of magnitude.
The value will be quantized, i.e. rounded to an integer increment of q. Quantization makes the upper bound inclusive.
- Parameters
lower (float) – Lower boundary of the output interval (e.g. 1e-4)
upper (float) – Upper boundary of the output interval (e.g. 1e-2)
q (float) – Quantization number. The result will be rounded to an integer increment of this value.
base (float) – Base of the log. Defaults to 10.
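qloguniform composes the two sketches above: a log-uniform draw followed by the same rounding step. A hedged sketch (q_log_uniform is a hypothetical helper):
import numpy as np

def q_log_uniform(lower, upper, q, base=10):
    # Log-uniform draw, then snapped to the nearest integer multiple of q.
    exponent = np.random.uniform(np.log(lower) / np.log(base),
                                 np.log(upper) / np.log(base))
    return np.round(base ** exponent / q) * q

print(q_log_uniform(1e-4, 1e-1, 5e-4))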
tune.randn¶
ray.tune.randn(mean: float = 0.0, sd: float = 1.0)[source]¶
Sample a float value normally with mean and sd.
- Parameters
mean (float) – Mean of the normal distribution. Defaults to 0.
sd (float) – SD of the normal distribution. Defaults to 1.
tune.qrandn¶
ray.tune.qrandn(mean: float, sd: float, q: float)[source]¶
Sample a float value normally with mean and sd.
The value will be quantized, i.e. rounded to an integer increment of q.
- Parameters
mean (float) – Mean of the normal distribution.
sd (float) – SD of the normal distribution.
q (float) – Quantization number. The result will be rounded to an integer increment of this value.
tune.randint¶
ray.tune.randint(lower: int, upper: int)[source]¶
Sample an integer value uniformly between lower and upper. lower is inclusive, upper is exclusive.
Sampling from tune.randint(10) is equivalent to sampling from np.random.randint(10).
tune.qrandint¶
ray.tune.qrandint(lower: int, upper: int, q: int = 1)[source]¶
Sample an integer value uniformly between lower and upper. lower is inclusive; here, upper is also inclusive (!).
The value will be quantized, i.e. rounded to an integer increment of q. Quantization makes the upper bound inclusive.
tune.choice¶
ray.tune.choice(categories: List)[source]¶
Sample a categorical value. Sampling from tune.choice([1, 2]) is equivalent to sampling from random.choice([1, 2]).
Grid Search API¶
ray.tune.grid_search(values: List)[source]¶
Convenience method for specifying grid search over a value.
- Parameters
values – An iterable whose parameters will be gridded.
Internals¶
BasicVariantGenerator¶
class ray.tune.suggest.BasicVariantGenerator(shuffle=False)[source]¶
Uses Tune’s variant generation for resolving variables.
See also: ray.tune.suggest.variant_generator.
- Parameters
shuffle (bool) – Shuffles the generated list of configurations.
User API:
from ray import tune
from ray.tune.suggest import BasicVariantGenerator

searcher = BasicVariantGenerator()
tune.run(my_trainable_func, search_alg=searcher)
Internal API:
from ray.tune.suggest import BasicVariantGenerator

searcher = BasicVariantGenerator()
searcher.add_configurations({"experiment": { ... }})
list_of_trials = searcher.next_trials()
searcher.is_finished == True
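For context, a minimal end-to-end sketch of the user API; the trainable and its reported metric are placeholders invented for illustration:
from ray import tune
from ray.tune.suggest import BasicVariantGenerator

def my_trainable_func(config):
    # Placeholder objective: just report the sampled value.
    tune.report(score=config["x"])

searcher = BasicVariantGenerator()
tune.run(
    my_trainable_func,
    config={"x": tune.uniform(0, 1)},
    num_samples=4,
    search_alg=searcher)

Since BasicVariantGenerator is also Tune’s default search algorithm, passing search_alg here is optional; it is shown explicitly for clarity.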