AutonomousExperimenter

class gpcam.autonomous_experimenter.AutonomousExperimenterGP(input_space_bounds, hyperparameters=None, hyperparameter_bounds=None, instrument_function=None, init_dataset_size=None, acquisition_function='variance', cost_function=None, cost_update_function=None, cost_function_parameters={}, kernel_function=None, prior_mean_function=None, noise_function=None, run_every_iteration=None, x_data=None, y_data=None, noise_variances=None, dataset=None, communicate_full_dataset=False, compute_device='cpu', store_inv=False, training_dask_client=None, acq_func_opt_dask_client=None, gp2Scale=False, gp2Scale_dask_client=None, gp2Scale_batch_size=10000, ram_economy=True, info=False, args=None)

Executes the autonomous loop for a single-task Gaussian process. Use the class AutonomousExperimenterFvGP for multi-task experiments. The AutonomousExperimenter is a convenience class that does not allow as much customization as using the GPOptimizer directly, but it is a great option to start with.
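
A minimal usage sketch, assuming a synthetic instrument and the data-dictionary keys ("x_data", "y_data") used in recent gpcam examples; everything here besides the class itself is illustrative and should be adapted to your instrument:

    import numpy as np
    from gpcam.autonomous_experimenter import AutonomousExperimenterGP

    # Hypothetical instrument: fills in 'y_data' for every suggested point.
    # The dict keys follow recent gpcam examples; verify them for your version.
    def instrument(data):
        for entry in data:
            entry["y_data"] = np.sin(np.linalg.norm(entry["x_data"]))
        return data

    # A 2d input space, so the default kernel needs D + 1 = 3 hyperparameters.
    bounds = np.array([[0.0, 10.0], [0.0, 10.0]])
    init_hps = np.ones(3)
    hp_bounds = np.array([[0.001, 100.0]] * 3)

    ae = AutonomousExperimenterGP(bounds, init_hps, hp_bounds,
                                  instrument_function=instrument,
                                  init_dataset_size=20)
    ae.train()    # synchronous initial training
    ae.go(N=100)  # run the autonomous loop until 100 points are measured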

Parameters:
  • input_space_bounds (np.ndarray) – A numpy array of floats of shape D x 2 describing the input space.

  • hyperparameters (np.ndarray, optional) – Vector of hyperparameters used by the GP initially. This class provides methods to train hyperparameters. The default is a random draw from a uniform distribution within hyperparameter_bounds, with a length appropriate for the default kernel (D + 1 hyperparameters); the default kernel is an anisotropic Matern kernel with automatic relevance determination (ARD). If gp2Scale is enabled, the default kernel changes to the anisotropic Wendland kernel.

  • hyperparameter_bounds (np.ndarray, optional) – A 2d numpy array of shape (N x 2), where N is the number of needed hyperparameters. The default is None, in which case the hyperparameter_bounds are estimated from the domain size and the initial y_data. If the data changes significantly, the hyperparameters and the bounds should be changed/retrained. Initial hyperparameters and bounds can also be set in the train calls. The default only works for the default kernels.

  • instrument_function (Callable, optional) – A function that takes data points (a list of dicts) and returns the same list with the measurement data filled in. The function is expected to communicate with the instrument, perform the measurements, and populate the fields of each data dict.

  • init_dataset_size (int, optional) – If x_data and y_data are not provided and dataset is not provided, init_dataset_size must be provided. An initial dataset is constructed randomly with this length. The instrument_function is immediately called to measure values at these initial points.

  • acquisition_function (Callable, optional) – The acquisition function accepts as input a numpy array of shape V x D (where V is the number of input points and D is the dimensionality of the input space) and a GPOptimizer object. The return value is a 1-D array of length V providing a 'score' for each position, such that the highest-scored point will be measured next. Built-in functions can be selected with one of the following keys: ucb, lcb, maximum, minimum, variance, expected_improvement, relative information entropy, relative information entropy set, probability of improvement, gradient, total correlation, target probability. If None, the default is variance, meaning fvgp.GP.posterior_covariance with variance_only = True. The acquisition function can also be a callable of the form my_func(x, gpcam.GPOptimizer), which will be maximized (!!!), so make sure desirable new measurement points are located at maxima (a sketch is given after this parameter list). Explanations of the built-in acquisition functions:

      – variance: the posterior variance.
      – relative information entropy: the KL divergence between the prior over predictions and the posterior.
      – relative information entropy set: the KL divergence between the prior over predictions and the posterior, point by point.
      – ucb: upper confidence bound, posterior mean + 3 std.
      – lcb: lower confidence bound, -(posterior mean - 3 std).
      – maximum: finds the maximum of the current posterior mean.
      – minimum: finds the minimum of the current posterior mean.
      – gradient: puts focus on high-gradient regions.
      – probability of improvement: as the name suggests.
      – expected improvement: as the name suggests.
      – total correlation: extension of mutual information to more than 2 random variables.
      – target probability: probability of a target; needs a dictionary GPOptimizer.args = {'a': lower bound, 'b': upper bound} to be defined.

  • cost_function (Callable, optional) – A function encoding the cost of motion through the input space and the cost of a measurement. Its inputs are origin (np.ndarray of size V x D), x (np.ndarray of size V x D), and the value of cost_function_parameters; origin is the starting position, and x is the destination position. The return value is a 1-D array of length V describing the costs as floats. The 'score' from acquisition_function is divided by this returned cost to determine the next measurement point (a sketch is given after this parameter list). If None, the default is a uniform cost of 1.

  • cost_update_function (Callable, optional) – A function that updates the cost_func_params which are communicated to the cost_function. This function accepts as input costs (a list of cost values determined by instrument_function), bounds (a V x 2 numpy array) and a parameters object. The default is a no-op.

  • cost_function_parameters (Any, optional) – An object that is communicated to the cost_function and cost_update_function. The default is {}.

  • kernel_function (Callable, optional) – A symmetric positive semi-definite covariance function (a kernel) that calculates the covariance between data points. It is a function of the form k(x1, x2, hyperparameters, obj). The input x1 is an N1 x D array of positions, x2 is an N2 x D array of positions, the hyperparameters argument is a 1d array of length D+1 for the default kernel (and of a different, user-defined length for other kernels), and obj is an fvgp.GP instance. The default is a stationary anisotropic kernel (fvgp.GP.default_kernel) which performs automatic relevance determination (ARD). The output is a covariance matrix, an N1 x N2 numpy array. A sketch of a custom kernel is given after this parameter list.

  • prior_mean_function (Callable, optional) – A function that evaluates the prior mean at a set of input positions. It accepts as input an array of positions (of shape N1 x D), hyperparameters (a 1d array of length D+1 for the default kernel) and an fvgp.GP instance. The return value is a 1d array of length N1. If None is provided, fvgp.GP._default_mean_function is used.

  • noise_function (Callable, optional) – The noise function is a callable f(x, hyperparameters, obj) that returns a symmetric positive definite matrix of shape (len(x), len(x)). The input x is a numpy array of shape (N x D). The hyperparameter array is the same that is communicated to the mean and kernel functions. The obj is an fvgp.GP instance.

  • run_every_iteration (Callable, optional) – A function that is run at every iteration. It accepts as input a gpcam.AutonomousExperimenterGP instance. The default is a no-op.

  • x_data (np.ndarray, optional) – Initial data point positions.

  • y_data (np.ndarray, optional) – Initial data point values.

  • noise_variances (np.ndarray, optional) – Initial data point observation variances.

  • dataset (string, optional) – A filename of a gpcam-generated file that is used to initialize a new instance.

  • communicate_full_dataset (bool, optional) – If True, the full dataset will be communicated to the instrument_function on each iteration. If False, only the newly suggested data points will be communicated. The default is False.

  • compute_device (str, optional) – One of “cpu” or “gpu”, determines how linear system solves are run. The default is “cpu”.

  • store_inv (bool, optional) – If True, the algorithm calculates and stores the inverse of the covariance matrix after each training or update of the dataset or hyperparameters, which makes computing the posterior covariance faster. For larger problems (>2000 data points), storing the inverse should be avoided due to computational instability and cost. The default is False. Note that training always uses a Cholesky or LU decomposition instead of the inverse for stability reasons. Storing the inverse is a good option when the dataset is not too large and the posterior covariance is heavily used.

  • training_dask_client (distributed.client.Client, optional) – A Dask Distributed Client instance for distributed training. If None is provided, a new dask.distributed.Client instance is constructed.

  • acq_func_opt_dask_client (distributed.client.Client, optional) – A Dask Distributed Client instance for distributed acquisition_function computation. If None is provided, a new dask.distributed.Client instance is constructed.

  • info (bool, optional) – Specifies if info should be displayed. Default = False.
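
As an illustration of the callable form of acquisition_function described above, here is a sketch of a custom acquisition function; the "v(x)" return key of posterior_covariance is taken from fvgp and should be verified for your installed version:

    import numpy as np

    # Hypothetical acquisition function: posterior standard deviation.
    # gpcam MAXIMIZES the returned scores, so high values mark points to measure.
    def my_acq(x, gp_optimizer):
        var = gp_optimizer.posterior_covariance(x, variance_only=True)["v(x)"]
        return np.sqrt(var)

    # ae = AutonomousExperimenterGP(..., acquisition_function=my_acq)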
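
The cost machinery described under cost_function, cost_update_function, and cost_function_parameters can be sketched as follows; the linear travel cost and the "offset" parameter are purely illustrative:

    import numpy as np

    # Hypothetical cost: Euclidean travel distance plus a per-measurement offset.
    def my_cost(origin, x, params):
        return np.linalg.norm(x - origin, axis=1) + params["offset"]

    # Hypothetical update: raise the offset to the mean of the observed costs.
    def my_cost_update(costs, bounds, params):
        if len(costs) > 0:
            params["offset"] = float(np.mean(costs))
        return params

    # ae = AutonomousExperimenterGP(..., cost_function=my_cost,
    #                               cost_update_function=my_cost_update,
    #                               cost_function_parameters={"offset": 1.0})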
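
Custom kernel_function and prior_mean_function callables, in the signatures documented above; the squared-exponential form is an illustrative choice, not the library default:

    import numpy as np

    # Hypothetical anisotropic squared-exponential kernel: hps[0] is the signal
    # variance, hps[1:] are per-dimension length scales (D + 1 hyperparameters).
    def my_kernel(x1, x2, hps, obj):
        d = (x1[:, None, :] - x2[None, :, :]) / hps[1:]
        return hps[0] * np.exp(-0.5 * np.sum(d**2, axis=2))  # N1 x N2 matrix

    # Hypothetical constant prior mean.
    def my_mean(x, hps, obj):
        return np.zeros(len(x))

    # ae = AutonomousExperimenterGP(..., kernel_function=my_kernel,
    #                               prior_mean_function=my_mean)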

Attributes:
  • x_data (np.ndarray) – Data point positions.

  • y_data (np.ndarray) – Data point values.

  • variances (np.ndarray) – Data point observation variances.

  • data.dataset (list) – All data.

  • hyperparameter_bounds (np.ndarray) – A 2d array of floats of shape J x 2, where J matches the length of hyperparameters, defining the bounds for training.

  • gp_optimizer (gpcam.GPOptimizer) – A GPOptimizer instance used for initializing a Gaussian process and performing optimization of the posterior.

go(N=1000000000000000.0, breaking_error=1e-50, retrain_globally_at=(20, 50, 100, 400, 1000), retrain_locally_at=(20, 40, 60, 80, 100, 200, 400, 1000), retrain_async_at=(), update_cost_func_at=(), acq_func_opt_setting=<function AutonomousExperimenterGP.<lambda>>, training_opt_max_iter=20, training_opt_pop_size=10, training_opt_tol=1e-06, acq_func_opt_max_iter=20, acq_func_opt_pop_size=20, acq_func_opt_tol=1e-06, acq_func_opt_tol_adjust=0.1, number_of_suggested_measurements=1, checkpoint_filename=None, constraints=(), break_condition_callable=<function AutonomousExperimenterGP.<lambda>>)

Function to start the autonomous-data-acquisition loop.

Parameters:
  • N (int, optional) – Run until N points are measured. The default is 1e15.

  • breaking_error (float, optional) – Run until breaking_error is achieved (or at max N). The default is 1e-50.

  • retrain_globally_at (Iterable [int], optional) – Retrains the hyperparameters at the given number of measurements using global optimization. The default is [20,50,100,400,1000].

  • retrain_locally_at (Iterable[int], optional) – Retrains the hyperparameters at the given number of measurements using local gradient-based optimization. The default is [20,40,60,80,100,200,400,1000].

  • retrain_async_at (Iterable[int], optional) – Retrains the hyperparameters at the given number of measurements using the HGDL algorithm. This training is asynchronous and can be run in a distributed fashion using training_dask_client. The default is [].

  • update_cost_func_at (Iterable[int], optional) – Calls the update_cost_function at the given number of measurements. Default = ()

  • acq_func_opt_setting (Callable, optional) – A callable that accepts as input the iteration index and returns one of "local", "global", or "hgdl". This switches between local gradient-based, global, and hybrid optimization for the acquisition function. The default is lambda number: "global" if number % 2 == 0 else "local". A sketch is given after this parameter list.

  • training_opt_max_iter (int, optional) – The maximum number of iterations for any training. The default value is 20.

  • training_opt_pop_size (int, optional) – The population size used for any training with a global component (HGDL or standard global optimizers). The default value is 10.

  • training_opt_tol (float, optional) – The optimization tolerance for all training optimization. The default is 1e-6.

  • acq_func_opt_max_iter (int, optional) – The maximum number of iterations for the acquisition_function optimization. The default is 20.

  • acq_func_opt_pop_size (int, optional) – The population size used for any acquisition_function optimization with a global component (HGDL or standard global optimizers). The default value is 20.

  • acq_func_opt_tol (float, optional) – The optimization tolerance for all acquisition_function optimization. The default value is 1e-6.

  • acq_func_opt_tol_adjust (float, optional) – The acquisition_function optimization tolerance is adjusted at every iteration as a fraction of this value. The default value is 0.1 .

  • number_of_suggested_measurements (int, optional) – The algorithm will try to return this many suggestions for new measurements. This may be limited by how many optima the algorithm may find. If greater than 1, then the acquisition_function optimization method is automatically set to use HGDL. The default is 1.

  • checkpoint_filename (str, optional) – When provided, a checkpoint of all the accumulated data will be written to this file on each iteration.

  • constraints (tuple, optional) – If provided, this subjects the acquisition function optimization to constraints. For the definition of the constraints, follow the structure your chosen optimizer requires.

  • break_condition_callable (Callable, optional) – Autonomous loop will stop when this function returns True. The function takes as input a gpcam.AutonomousExperimenterGP instance.
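
The two callable-valued arguments can be sketched as follows; the alternating schedule mirrors the documented default, and the variance-based stopping rule (using the "v(x)" key noted earlier, an assumption to verify) is illustrative:

    import numpy as np

    # Alternate between global and local acquisition-function optimization.
    def my_acq_setting(iteration):
        return "global" if iteration % 2 == 0 else "local"

    # Hypothetical stopping rule: stop once the posterior variance over the
    # measured points is small everywhere.
    def my_break(ae):
        v = ae.gp_optimizer.posterior_covariance(ae.x_data, variance_only=True)["v(x)"]
        return np.max(v) < 1e-3

    ae.go(N=500,
          retrain_globally_at=(20, 50, 100),
          retrain_locally_at=(200, 400),
          acq_func_opt_setting=my_acq_setting,
          break_condition_callable=my_break)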

kill_all_clients()

Function to kill both dask.distributed.Client instances. Will be called automatically at the end of go().

kill_training()

Function to stop an asynchronous training. This leaves the dask.distributed.Client alive.

train(init_hyperparameters=None, pop_size=10, tol=0.0001, max_iter=20, method='global', constraints=())

This function finds the maximum of the log marginal likelihood and therefore trains the GP (synchronously). The GP prior will automatically be updated with the new hyperparameters after the training.

Parameters:
  • init_hyperparameters (np.ndarray, optional) – Initial hyperparameters used as starting location for all optimizers with local component. The default is a random draw from a uniform distribution within the bounds.

  • pop_size (int, optional) – The number of individuals used for any optimizer with a global component. Default = 10.

  • tol (float, optional) – Used as termination criterion for local optimizers. Default = 0.0001.

  • max_iter (int, optional) – Maximum number of iterations for global and local optimizers. Default = 20.

  • method (str, optional) – Method to be used for the training. The default is "global", which means a differential evolution algorithm is run with the specified parameters. The options are "global", "local", or "mcmc".

  • constraints (tuple of object instances, optional) – Equality and inequality constraints for the optimization. If the optimizer is hgdl see hgdl.readthedocs.io. If the optimizer is a scipy optimizer, see the scipy documentation.
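
A short usage sketch of the documented options:

    # Global (differential evolution) training, then a local gradient-based refinement.
    ae.train(method="global", pop_size=10, max_iter=20)
    ae.train(method="local", tol=1e-4)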

train_async(init_hyperparameters=None, max_iter=10000, local_method='L-BFGS-B', global_method='genetic', constraints=())

Function to train the Gaussian Process asynchronously using the HGDL optimizer. The use is entirely optional; this function will be called as part of the go() command, if so specified. This call starts a highly parallelized optimization process, on an architecture specified by the dask.distributed.Client. The main purpose of this function is to allow for large-scale distributed training.

Parameters:
  • init_hyperparameters (np.ndarray, optional) – Initial hyperparameters used as starting location for all optimizers with local component. The default is a random draw from a uniform distribution within the bounds.

  • max_iter (int, optional) – Maximum number of iterations for the global method. Default = 10000. Note that the call runs asynchronously, so this number does not affect run time.

  • local_method (str, optional) – Local method to be used inside HGDL. Many scipy.optimize.minimize methods can be used, or a user-defined callable. Please read the HGDL docs for more information. Default = L-BFGS-B.

  • global_method (str, optional) – Global method to be used inside HGDL. Please read the HGDL docs for more information. Default = genetic.

  • constraints (tuple of object instances, optional) – Equality and inequality constraints for the optimization. See hgdl.readthedocs.io for setting up constraints.

update_hps()

Function to update the hyperparameters if an asynchronous training is running. Will be called during go() as specified.
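
Putting train_async(), update_hps(), kill_training(), and kill_all_clients() together, a sketch of the asynchronous workflow when driven by hand rather than through go():

    # Start distributed HGDL training (uses training_dask_client, or a newly
    # created dask.distributed.Client if none was provided).
    ae.train_async(max_iter=10000)

    # ...while measurements continue, periodically adopt the best
    # hyperparameters found so far:
    ae.update_hps()

    # Stop the asynchronous training but keep the dask client alive:
    ae.kill_training()

    # At the very end, shut down both dask clients:
    ae.kill_all_clients()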