Analysis (tune.analysis)¶
You can use the ExperimentAnalysis object for analyzing results. It is returned automatically when calling tune.run.
analysis = tune.run(
    trainable,
    name="example-experiment",
    num_samples=10,
)
Here are some example operations for obtaining a summary of your experiment:
# Get a dataframe for the last reported results of all of the trials
df = analysis.dataframe()
# Get a dataframe for the max accuracy seen for each trial
df = analysis.dataframe(metric="mean_accuracy", mode="max")
# Get a dict mapping {trial logdir -> dataframes} for all trials in the experiment.
all_dataframes = analysis.trial_dataframes
# Get a list of trials
trials = analysis.trials
You may want to get a summary of multiple experiments that point to the same local_dir. For this, you can use the Analysis class.
from ray.tune import Analysis
analysis = Analysis("~/ray_results/example-experiment")
ExperimentAnalysis (tune.ExperimentAnalysis)¶
- class ray.tune.ExperimentAnalysis(experiment_checkpoint_path, trials=None, default_metric=None, default_mode=None)[source]¶
Bases: ray.tune.analysis.experiment_analysis.Analysis
Analyze results from a Tune experiment.
To use this class, the experiment must be executed with the JsonLogger.
- Parameters
experiment_checkpoint_path (str) – Path to a json file representing an experiment state. Corresponds to Experiment.local_dir/Experiment.name/experiment_state.json
trials (list|None) – List of trials that can be accessed via analysis.trials.
default_metric (str) – Default metric for comparing results. Can be overwritten with the metric parameter in the respective functions.
default_mode (str) – Default mode for comparing results. Has to be one of [min, max]. Can be overwritten with the mode parameter in the respective functions.
Example
>>> tune.run(my_trainable, name="my_exp", local_dir="~/tune_results")
>>> analysis = ExperimentAnalysis(
>>>     experiment_checkpoint_path="~/tune_results/my_exp/state.json")
- get_best_trial(metric=None, mode=None, scope='all')[source]¶
Retrieve the best trial object.
Compares all trials’ scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().
- Parameters
metric (str) – Key for trial info to order on. Defaults to self.default_metric.
mode (str) – One of [min, max]. Defaults to self.default_mode.
scope (str) – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
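A minimal usage sketch; the "mean_accuracy" metric name is an assumption about what the trainable reports, not part of the API:
# Assumes the trainable reports a "mean_accuracy" metric.
best_trial = analysis.get_best_trial(metric="mean_accuracy", mode="max", scope="last")
print(best_trial.config)       # hyperparameter config of the best trial
print(best_trial.last_result)  # last result reported by the best trial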
- get_best_config(metric=None, mode=None, scope='all')[source]¶
Retrieve the best config corresponding to the trial.
Compares all trials’ scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().
- Parameters
metric (str) – Key for trial info to order on. Defaults to self.default_metric.
mode (str) – One of [min, max]. Defaults to self.default_mode.
scope (str) – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
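A minimal sketch, again assuming a reported "mean_accuracy" metric:
best_config = analysis.get_best_config(metric="mean_accuracy", mode="max", scope="last")
print(best_config)  # dict of hyperparameters used by the best trial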
- get_best_logdir(metric=None, mode=None, scope='all')[source]¶
Retrieve the logdir corresponding to the best trial.
Compares all trials’ scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().
- Parameters
metric (str) – Key for trial info to order on. Defaults to self.default_metric.
mode (str) – One of [min, max]. Defaults to self.default_mode.
scope (str) – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
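For instance, a sketch that locates the log directory of the best trial (metric name is an assumption):
best_logdir = analysis.get_best_logdir(metric="mean_accuracy", mode="max")
print(best_logdir)  # path to the directory containing that trial's logs and checkpoints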
Analysis (tune.Analysis)¶
- class ray.tune.Analysis(experiment_dir, default_metric=None, default_mode=None)[source]¶
Analyze all results from a directory of experiments.
To use this class, the experiment must be executed with the JsonLogger.
- Parameters
experiment_dir (str) – Directory of the experiment to load.
default_metric (str) – Default metric for comparing results. Can be overwritten with the metric parameter in the respective functions.
default_mode (str) – Default mode for comparing results. Has to be one of [min, max]. Can be overwritten with the mode parameter in the respective functions.
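For example, a sketch that sets the defaults at construction time so later calls can omit metric and mode; the directory path and metric name are assumptions:
from ray.tune import Analysis

analysis = Analysis(
    "~/ray_results/example-experiment",  # assumed experiment directory
    default_metric="mean_accuracy",
    default_mode="max",
)
best_config = analysis.get_best_config()  # uses the defaults set above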
- dataframe(metric=None, mode=None)[source]¶
Returns a pandas.DataFrame object constructed from the trials.
- Parameters
metric (str) – Key for trial info to order on. If None, uses last result.
mode (str) – One of [min, max].
- Returns
Constructed from a result dict of each trial.
- Return type
pd.DataFrame
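A minimal sketch; the "mean_accuracy" column name is an assumption about what the trainable reports:
# One row per trial, taken from the result selected by metric/mode.
df = analysis.dataframe(metric="mean_accuracy", mode="max")
print(df.sort_values("mean_accuracy", ascending=False).head())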
- get_best_config(metric=None, mode=None)[source]¶
Retrieve the best config corresponding to the trial.
- Parameters
metric (str) – Key for trial info to order on. Defaults to self.default_metric.
mode (str) – One of [min, max]. Defaults to self.default_mode.
- get_best_logdir(metric=None, mode=None)[source]¶
Retrieve the logdir corresponding to the best trial.
- Parameters
metric (str) – Key for trial info to order on. Defaults to self.default_metric.
mode (str) – One of [min, max]. Defaults to self.default_mode.
- get_all_configs(prefix=False)[source]¶
Returns a list of all configurations.
- Parameters
prefix (bool) – If True, flattens the config dict and prepends config/.
- Returns
List of all configurations of trials.
- Return type
List[dict]
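A minimal sketch:
# With prefix=True, nested config keys are flattened and prepended with "config/".
all_configs = analysis.get_all_configs(prefix=True)
print(len(all_configs))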
- get_trial_checkpoints_paths(trial, metric=None)[source]¶
Gets paths and metrics of all persistent checkpoints of a trial.
- Parameters
trial (Trial) – The log directory of a trial, or a trial instance.
metric (str) – Key for trial info to return, e.g. “mean_accuracy”. “training_iteration” is used by default if no value was passed to self.default_metric.
- Returns
List of [path, metric] for all persistent checkpoints of the trial.
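A minimal sketch that lists the checkpoints of the best trial by its log directory; the metric name is an assumption:
best_logdir = analysis.get_best_logdir(metric="mean_accuracy", mode="max")
for path, metric_value in analysis.get_trial_checkpoints_paths(best_logdir, metric="mean_accuracy"):
    print(path, metric_value)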
- get_best_checkpoint(trial, metric=None, mode=None)[source]¶
Gets best persistent checkpoint path of provided trial.
- Parameters
trial (Trial) – The log directory of a trial, or a trial instance.
metric (str) – Key of trial info to return, e.g. “mean_accuracy”. “training_iteration” is used by default if no value was passed to self.default_metric.
mode (str) – One of [min, max]. Defaults to self.default_mode.
- Returns
Path for best checkpoint of trial determined by metric.
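A minimal sketch (metric name is an assumption):
best_logdir = analysis.get_best_logdir(metric="mean_accuracy", mode="max")
checkpoint_path = analysis.get_best_checkpoint(best_logdir, metric="mean_accuracy", mode="max")
# checkpoint_path can then be passed to whatever restore logic the trainable uses.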
- property trial_dataframes¶
List of all dataframes of the trials.
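For example, a sketch that plots the training curves of all trials, assuming each trial reported a mean_accuracy column and using the {logdir -> dataframe} mapping shown at the top of this page:
import matplotlib.pyplot as plt

ax = None
for logdir, df in analysis.trial_dataframes.items():
    ax = df.mean_accuracy.plot(ax=ax, legend=False)
plt.show()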