Examples

M-LOOP includes a set of example configuration files for each of the controllers and interfaces. The examples can be found in the examples folder. For some controllers there are two files: one ending with _simple_config, which includes the standard configuration options, and one ending with _complete_config, which includes a comprehensive list of all the configuration options available.

The options available are also comprehensively documented in the M-LOOP API as keywords for each of the classes. However, if you are not familiar with Python, the quickest and easiest way to learn what options are available is to look at the provided examples.

Each of the example files is also used when running tests of M-LOOP, so if you use one as a starting point for your own configuration file, please copy it elsewhere before modifying it.
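
If you are comfortable with Python, the same options can also be set directly through the M-LOOP API rather than a configuration file. The sketch below uses mloop.interfaces.FileInterface and mloop.controllers.create_controller from the M-LOOP API; treat the exact keyword set as an assumption and check the API documentation.

#A minimal sketch of setting options through the Python API instead of
#a configuration file. Treat the exact keywords as assumptions and
#check the M-LOOP API documentation.
import mloop.interfaces as mli
import mloop.controllers as mlc

interface = mli.FileInterface(interface_out_filename='exp_input',
                              interface_in_filename='exp_output')
controller = mlc.create_controller(interface,
                                   controller_type='gaussian_process',
                                   max_num_runs=100,
                                   target_cost=0.1,
                                   num_params=2,
                                   min_boundary=[0, 0],
                                   max_boundary=[2, 2])
controller.optimize()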

Interfaces

There are currently two interfaces supported: ‘file’ and ‘shell’. You can specify which interface you want with the option:

interface_type = [name]

The default is ‘file’. The specific options for each of the interfaces are described below.

File Interface

The file interface exchanges information with the experiment by reading and writing files on disk. You can change the names of the files used by the file interface and their type. The file interface options are described in file_interface_config.txt.

#File Interface Options
#----------------------

interface_type = 'file'                #The type of interface
interface_out_filename = 'exp_input'   #The filename of the file output by the interface and input into the experiment
interface_in_filename  = 'exp_output'  #The filename of the file input into the interface and output by the experiment
interface_file_type = 'txt'            #The file_type of both the input and output files, can be 'txt', 'pkl' or 'mat'.
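
To give a feel for the protocol, below is a rough sketch of the experiment side of the file-interface loop. It assumes the default 'txt' file type, where the input file holds a single key = [value, ...] line and the experiment replies with at least a cost; the exact key names M-LOOP writes and expects are assumptions here, so check the tutorial for the precise format.

#Sketch of an experiment-side loop for the file interface. The exact
#key names in the files are assumptions; see the M-LOOP tutorial.
import ast
import os
import time

IN_FILE = 'exp_input.txt'    #written by M-LOOP, read by the experiment
OUT_FILE = 'exp_output.txt'  #written by the experiment, read by M-LOOP

def run_experiment(params):
    #Placeholder for the real experiment: a simple quadratic cost.
    return sum(p**2 for p in params)

while True:
    while not os.path.exists(IN_FILE):  #wait for the next parameter set
        time.sleep(0.1)
    with open(IN_FILE) as f:
        params = ast.literal_eval(f.read().split('=', 1)[1].strip())
    os.remove(IN_FILE)
    with open(OUT_FILE, 'w') as f:
        f.write('cost = {}\n'.format(run_experiment(params)))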

Shell Interface

The shell interface is for experiments that can be run through a command executed in a shell. Information is then piped between M-LOOP and the experiment through the shell. You can change the command used to run the experiment and the way the parameters are formatted. The shell interface options are described in shell_interface_config.txt.

#Command Line Interface Options
#------------------------------

interface_type = 'shell'            #The type of interface
command = 'python shell_script.py'  #The command executed in the shell to run the experiment and obtain a cost from the parameters
params_args_type = 'direct'         #The format of the parameters when providing them on the command line: 'direct' simply appends them, e.g. python shell_script.py 7 2 1; 'named' names each parameter, e.g. python shell_script.py --param1 7 --param2 2 --param3 1
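
For illustration, a hypothetical shell_script.py compatible with params_args_type = 'direct' might look like the sketch below: the parameters arrive as plain command-line arguments and the cost is reported on standard output. The exact output format M-LOOP parses is an assumption here, so check the shell-interface documentation.

#Hypothetical shell_script.py for params_args_type = 'direct'. The
#"cost = ..." line on stdout is an assumed output format.
import sys

params = [float(arg) for arg in sys.argv[1:]]  #e.g. python shell_script.py 7 2 1
cost = sum((p - 1.0)**2 for p in params)       #placeholder experiment
print('cost = {}'.format(cost))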

Controllers

There are currently four controller types supported: ‘gaussian_process’, ‘differential_evolution’, ‘nelder_mead’ and ‘random’. The default is ‘gaussian_process’. You can set which controller you want to use with the option:

controller_type = [name]

Each of the controllers and their specific options are described below. There is also a set of common options shared by all controllers, which are described in controller_options.txt. The common options include the parameter settings and the halting conditions.

#General Controller Options
#--------------------------

#Halting conditions
max_num_runs = 1000                       #number of planned runs
target_cost = 0.1                         #cost to beat
max_num_runs_without_better_params = 100  #max allowed number of runs between finding better parameters

#Parameter controls
num_params = 2          #Number of parameters
min_boundary = [0,0]    #Minimum value for each parameter
max_boundary = [2,2]    #Maximum value for each parameter

#Filename related
controller_archive_filename = 'agogo'     #filename prefix for controller archive
controller_archive_file_type = 'mat'      #file_type for controller archive 
learner_archive_filename = 'ogoga'        #filename prefix for learner archive
learner_archive_file_type = 'pkl'         #file_type for learner archive
archive_extra_dict = {'test':'this_is'}   #dictionary of any extra data to be put in archive
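
Because the archives are ordinary 'txt', 'pkl' or 'mat' files, they can be inspected after a run with standard tools. A rough sketch for a 'mat'-format controller archive follows; the filename is a placeholder, since M-LOOP adds its own suffix to the prefix set above.

#Rough sketch of inspecting a 'mat'-format controller archive after a
#run. The filename is a placeholder; list the archive folder to find
#the real name produced from the prefix above.
import scipy.io as sio

archive = sio.loadmat('agogo.mat')  #placeholder filename
print(sorted(archive.keys()))       #e.g. the cost and parameter history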

Gaussian process

The Gaussian-process controller is the default controller and currently the most sophisticated machine-learning algorithm in M-LOOP. It uses a Gaussian process to develop a model for how the parameters relate to the measured cost, effectively creating a model of how the experiment operates. This model is then used when picking new points to test.

There are two example files for the Gaussian-process controller. The first, gaussian_process_simple_config.txt, contains the basic options:

#Gaussian Process Basic Options
#------------------------------

#General options
max_num_runs = 100               #number of planned runs
target_cost = 0.1                #cost to beat

#Gaussian process controller options
controller_type = 'gaussian_process'   #name of controller to use
num_params = 3                         #number of parameters
min_boundary = [-0.8,-0.9,-1.1]        #minimum boundary
max_boundary = [0.8,0.9,1.1]           #maximum boundary
trust_region = 0.4                     #maximum % move distance from best params
cost_has_noise = False                 #whether cost function has noise
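
To make the trust_region option concrete, the sketch below works through the arithmetic, assuming a fractional trust_region is interpreted as a fraction of the boundary range centered on the current best parameters:

#Sketch of how a fractional trust_region bounds the next parameters,
#assuming it is a fraction of the boundary range around the current
#best parameters.
import numpy as np

min_boundary = np.array([-0.8, -0.9, -1.1])
max_boundary = np.array([0.8, 0.9, 1.1])
trust_region = 0.4
best_params = np.array([0.1, -0.2, 0.3])  #hypothetical current best

radius = trust_region * (max_boundary - min_boundary)
lower = np.maximum(min_boundary, best_params - radius)
upper = np.minimum(max_boundary, best_params + radius)
print(lower, upper)  #region the next parameters must fall within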

The second, gaussian_process_complete_config.txt, contains a comprehensive list of options:

#Gaussian Process Complete Options
#---------------------------------

#General options
max_num_runs = 100               #number of planned runs
target_cost = 0.1                #cost to beat

#Gaussian process options
controller_type = 'gaussian_process'
num_params = 2                         #number of parameters
min_boundary = [-10.,-10.]             #minimum boundary
max_boundary = [10.,10.]               #maximum boundary
length_scale = [1.0]                   #initial length scales for GP
cost_has_noise = True                  #whether cost function has noise
noise_level = 0.1                      #initial noise level
update_hyperparameters = True          #whether noise level and length scales are updated
trust_region = [5,5]                   #maximum move distance from best params
default_bad_cost = 10                  #default cost for bad run
default_bad_uncertainty = 1            #default uncertainty for bad run
learner_archive_filename = 'a_word'    #filename of gp archive
learner_archive_file_type = 'mat'      #file type of archive
predict_global_minima_at_end = True    #find predicted global minimum at end
no_delay = True                        #if True, do not wait for the GP to make predictions before starting the next run. Default True

#Training source options
training_type = 'random'               #training type can be random or nelder_mead
first_params = [1.9,-1.0]              #first parameters to try in initial training
gp_training_filename = None            #filename for training from previous experiment
gp_training_file_type  = 'pkl'         #training data file type

#If you use nelder_mead as the initial training source, see nelder_mead_complete_config.txt for the relevant options.

Differential evolution

The differential evolution (DE) controller uses a DE algorithm for optimization. DE is a type of evolutionary algorithm, and is historically the most commonly used algorithm for automated optimization. DE will eventually find a global solution; however, it can take many experiments before it does so.

There are two example files for the differential-evolution controller. The first, differential_evolution_simple_config.txt, contains the basic options:

#Differential Evolution Basic Options
#------------------------------------

#General options
max_num_runs = 500              #number of planned runs
target_cost = 0.1               #cost to beat

#Differential evolution controller options
controller_type = 'differential_evolution'
num_params = 1                  #number of parameters
min_boundary = [-4.8]           #minimum boundary
max_boundary = [10.0]           #maximum boundary
trust_region = 0.6              #maximum % move distance from best params
first_params = [5.3]            #first parameters to try

The second, differential_evolution_complete_config.txt, contains a comprehensive list of options:

#Differential Evolution Complete Options
#---------------------------------------

#General options
max_num_runs = 500              #number of planned runs
target_cost = 0.1               #cost to beat

#Differential evolution controller options
controller_type = 'differential_evolution'
num_params = 2                  #number of parameters
min_boundary = [-1.2,-2]        #minimum boundary
max_boundary = [10.0,4]         #maximum boundary
trust_region = [3.2,3.1]        #maximum move distance from best params
first_params = None                  #first parameters to try; if None, a random set of parameters is chosen
evolution_strategy = 'best2'         #evolution strategy, can be 'best1', 'best2', 'rand1' or 'rand2'. 'best' uses the best point, 'rand' a random one; the number indicates the number of difference vectors added.
population_size = 10                 #a multiplier for the population size of a generation
mutation_scale = (0.4, 1.1)          #minimum and maximum values for the mutation scale factor; a value is randomly selected from this range each generation. Each value must be between 0 and 2.
cross_over_probability = 0.8         #the probability a parameter will be resampled during a mutation in a new generation
restart_tolerance = 0.02             #the fraction of the initial cost standard deviation that the population's cost standard deviation must drop below before the search is restarted
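
For intuition about the evolution_strategy and mutation_scale options, the sketch below shows a generic 'best2' mutation: the current best member plus two scaled difference vectors drawn from the population. This is a textbook DE illustration, not M-LOOP's internal code.

#Generic 'best2' differential-evolution mutation, for intuition only.
import numpy as np

rng = np.random.default_rng(0)
population = rng.uniform([-1.2, -2], [10.0, 4], size=(20, 2))
costs = (population**2).sum(axis=1)  #placeholder cost function
best = population[np.argmin(costs)]

scale = rng.uniform(0.4, 1.1)        #drawn from mutation_scale each generation
r1, r2, r3, r4 = population[rng.choice(len(population), 4, replace=False)]
mutant = best + scale * ((r1 - r2) + (r3 - r4))  #two difference vectors for 'best2'
print(mutant)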

Nelder-Mead

The Nelder-Mead controller implements the Nelder-Mead method for optimization. You can control the starting point and the size of the initial simplex of the method through the configuration file.

There are two example files for the Nelder-Mead controller. The first, nelder_mead_simple_config.txt, contains the basic options:

#Nelder-Mead Basic Options
#-------------------------

#General options
max_num_runs = 100               #number of planned runs
target_cost = 0.1                #cost to beat

#Specific options
controller_type = 'nelder_mead'
num_params = 3                   #number of parameters
min_boundary = [-1,-1,-1]        #minimum boundary
max_boundary = [1,1,1]           #maximum boundary
initial_simplex_scale = 0.4      #initial size of the simplex relative to the boundary size

The second, nelder_mead_complete_config.txt, contains a comprehensive list of options:

#Nelder-Mead Complete Options
#----------------------------

#General options
max_num_runs = 100                              #number of planned runs
target_cost = 0.1                               #cost to beat

#Specific options
controller_type = 'nelder_mead'
num_params = 5                                           #number of parameters
min_boundary = [-1.1,-1.2, -1.3, -1.4, -1.5]             #minimum boundary
max_boundary = [1.1, 1.1, 1.1, 1.1, 1.1]                 #maximum boundary
initial_simplex_corner = [-0.21,-0.23,-0.24,-0.23,-0.25]  #initial corner of the simplex
initial_simplex_displacements = [1,1,1,1,1]               #initial displacements for the N+1 (in this case 6) points of the simplex
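
The sketch below shows how a corner and a set of displacements define the N+1 starting points of the simplex: the corner itself plus one point displaced along each parameter axis. The construction is inferred from the option descriptions above rather than taken from M-LOOP's source.

#How initial_simplex_corner and initial_simplex_displacements define
#the N+1 starting points, as inferred from the descriptions above.
import numpy as np

corner = np.array([-0.21, -0.23, -0.24, -0.23, -0.25])
displacements = np.array([1., 1., 1., 1., 1.])

simplex = [corner] + [corner + d * axis
                      for d, axis in zip(displacements, np.eye(len(corner)))]
for point in simplex:  #6 points for 5 parameters
    print(point)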


Random

The random optimization algorithm picks parameters randomly from a uniform distribution within the parameter bounds, or within the trust region around the current best parameters when one is set.

There are two example files for the random controller. The first, random_simple_config.txt, contains the basic options:

#Random Basic Options
#--------------------

#General options
max_num_runs = 10           #number of planned runs

#Random controller options
controller_type = 'random'
num_params = 1                  #number of parameters
min_boundary = [1.2]            #minimum boundary
max_boundary = [10.0]           #maximum boundary
trust_region = 0.1              #maximum % move distance from best params
first_params = [5.3]            #first parameters to try

The second, random_complete_config.txt, contains a comprehensive list of options:

#Random Complete Options
#-----------------------

#General options
max_num_runs = 20              #number of planned runs

#Random controller options
controller_type = 'random'
num_params = 2                      #number of parameters
min_boundary = [1.2,-2]             #minimum boundary
max_boundary = [10.0,4]             #maximum boundary
trust_region = [0.2,0.5]            #maximum move distance from best params
first_params = [5.1,-1.0]           #first parameters to try
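
A rough sketch of the sampling described above, assuming each new point is drawn uniformly from the intersection of the parameter bounds and an absolute trust region around the current best parameters:

#Sketch of uniform sampling within the intersection of the parameter
#bounds and a trust region around the current best parameters, as
#assumed from the option descriptions above.
import numpy as np

rng = np.random.default_rng(0)
min_boundary = np.array([1.2, -2.0])
max_boundary = np.array([10.0, 4.0])
trust_region = np.array([0.2, 0.5])  #absolute move limits per parameter
best_params = np.array([5.1, -1.0])

lower = np.maximum(min_boundary, best_params - trust_region)
upper = np.minimum(max_boundary, best_params + trust_region)
print(rng.uniform(lower, upper))     #candidate parameters for the next run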


Logging

You can control the filename of the logs and also the level of detail reported to the file and the console. For more information see the logging levels of Python's logging module. The logging options are described in logging_config.txt.

#Logging Options
#---------------

log_filename = 'cl_run'              #Prefix for logging filename
file_log_level = logging.DEBUG       #Logging level saved in file
console_log_level = logging.WARNING  #Logging level presented to console, normally INFO
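
The level names are the standard constants from Python's logging module, so DEBUG records more detail than WARNING. A quick check of the numeric levels:

#The log levels are the standard constants from Python's logging
#module; lower numbers mean more verbose output.
import logging

for level in (logging.DEBUG, logging.INFO, logging.WARNING):
    print(logging.getLevelName(level), level)  #DEBUG 10, INFO 20, WARNING 30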

Extras

Extras refers to options for post-processing your data once the optimization is complete. Currently the only extra option is for visualizations. The extra options are described in extras_config.txt.

#Extra Options
#-------------

visualizations = False                    #whether plots should be presented after the run