# Optimal Learning Matlab Examples
The scripts found in this repository are intended to illustrate the use of optimal learning. Currently, there are five examples:
- `2Dphasediagramlearning.m`
- `global_learning_with_kg.m`
- `inverse_problem_simulations.m`
- `onsanger_phase_diagram_learning.m`
- `targeting_simulations.m`
Explanations of the examples are found below.
## Installing
The code was tested in Matlab R2017b on Windows 10. To use it, clone or fork the repositories `olmatlabexamples` and `olmatlabcore` to a folder on your local machine, then add all folders and subfolders to the Matlab path. The examples in the repositories should then run.
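The path setup described above can be done in one step from the Matlab prompt, assuming both repositories were cloned into the current working directory:

```matlab
% Add both repositories, including all subfolders, to the Matlab path.
% Assumes olmatlabexamples and olmatlabcore sit in the current directory.
addpath(genpath('olmatlabexamples'));
addpath(genpath('olmatlabcore'));
savepath;  % optional: persist the path across Matlab sessions
```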
## Contact
Admin: Prof. Kristofer Reyes (kris [at] csms [dot] io)
Team contact:
Aldair Gongora (agongora [at] bu [dot] edu)
## Learning Onsager Phase Diagrams
Code: `onsanger_phase_diagram_learning.m`
### Problem Description:
In this example, four decision policies (Exploration, Max Variance, Knowledge Gradient, and InOrder) are used to find the input value corresponding to a target response value. The true response surface is the 2D Ising model:
$$ f(T) = \left(1 - \left(\sinh(2 \beta J_1)\,\sinh(2 \beta J_2)\right)^{-2}\right)^{1/8} $$
where,
$$ \beta = {1 \over {k_B T}} $$
The temperature domain considered is $[0, 2000]$ K, discretized uniformly into $50$ points. The Boltzmann constant $k_B$ is $8.617330 \times 10^{-5}$ eV/K. The parameters $J_1$ and $J_2$ are unknown interaction energies; to generate the truth, they are sampled uniformly from the interval $[0.01, 0.05]$ eV. The truth samples are used to define the mean and covariance of the multivariate Gaussian prior. The target ('goal') is $f(T_c) = 0.5$.
### Code Description:
The code for this example is broken down into $3$ parts:

`generate_truth.m`: This code evaluates the Ising function (shown above as the true response surface) via `truth.m`, using two randomly generated ('selected') $J$ values (the unknown interaction energies) in the domain $[0.01, 0.05]$ eV and a $1 \times 50$ vector of temperature values ranging from the minimum (0) to the maximum (2000).
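A minimal sketch of this step (not the repository's actual `generate_truth.m`; the helper `truth` is assumed to take the temperature vector and the two interaction energies, in that order):

```matlab
% Sketch of truth generation: sample unknown interaction energies and
% evaluate the Onsager response on a uniform temperature grid.
Jmin = 0.01; Jmax = 0.05;          % eV, sampling interval for J1, J2
J1 = Jmin + (Jmax - Jmin) * rand;  % uniformly sampled interaction energy
J2 = Jmin + (Jmax - Jmin) * rand;
T  = linspace(0, 2000, 50);        % 1x50 temperature grid in K
fstar = truth(T, J1, J2);          % 1x50 vector of true response values
```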

`truth.m`: This code evaluates the Ising function (shown above as the true response surface) for input values of temperature ($T$) and the unknown interaction energies ($J_1$ and $J_2$).
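A sketch of such a function, written directly from the Onsager formula above (an illustration, not necessarily identical to the repository's `truth.m`):

```matlab
function f = truth(T, J1, J2)
% Evaluate the 2D Ising (Onsager) response at temperatures T (in K)
% for interaction energies J1, J2 (in eV).
kB = 8.617330e-5;                        % Boltzmann constant, eV/K
beta = 1 ./ (kB * T);                    % inverse temperature
s = sinh(2 * beta * J1) .* sinh(2 * beta * J2);
f = max(1 - s.^(-2), 0).^(1/8);          % clamped to zero above T_c
end
```

The clamp avoids complex values above the critical temperature, where $1 - s^{-2}$ goes negative.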

`main.m`: Most of this code sets up the parameters, iteration conditions, and the necessary anonymous functions. Specifically, it:
- Sets a global $k_B$ value
- Sets simulation loop sizes (number of simulations, number of samples for priors, number of experiments, and number of Monte Carlo steps)
- Sets noise levels (a $1 \times 3$ vector)
- Discretizes the temperature domain into the appropriate number of points
- Creates an anonymous function for generating truth (response) values for the given temperature values and randomly selected (unknown) interaction energies
- Creates anonymous functions for the Exploration, Max Variance, KG, and InOrder policies
- Simulates the aforementioned policies
- Outputs truth, observations, mean ($\mu$), standard deviation ($\sigma$), choices, and prior samples
- Computes metrics for policy performance comparison
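The overall structure of the steps above can be sketched as follows. The loop sizes are illustrative, and the function names (`simulate_policy`, `exploration_policy`, etc.) are hypothetical placeholders, not necessarily the identifiers used in `olmatlabcore`:

```matlab
global kB; kB = 8.617330e-5;            % Boltzmann constant, eV/K

n_sims = 100; n_prior = 1000;           % loop sizes (illustrative values)
n_expt = 20;  n_mc = 1000;
sigma_w = [0.101 0.251 0.501];          % noise levels (1x3 vector)

T = linspace(0, 2000, 50);              % discretized temperature domain

% Anonymous function generating a truth sample with random J1, J2.
gen_truth = @() truth(T, 0.01 + 0.04*rand, 0.01 + 0.04*rand);

% Hypothetical policy handles; the actual names live in olmatlabcore.
policies = {@exploration_policy, @max_variance_policy, ...
            @kg_policy, @inorder_policy};

for p = 1:numel(policies)
    % simulate_policy stands in for the core simulation routine.
    results{p} = simulate_policy(policies{p}, gen_truth, sigma_w, ...
                                 n_sims, n_expt);
end
```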
### Results and Analysis:
The following images depict the results from running the script.
The three plots below depict the relative error versus the number of experiments for three different noise levels. The global relative error is computed using the following equation:
$$ error = { \left| f^{*} - f^{n} \right| \over \left| f^{*} \right| } $$
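This can be evaluated elementwise over the discretized temperature domain, e.g. (a sketch, with `fstar` the true response vector and `fn` the estimate after $n$ experiments):

```matlab
% Global relative error between the truth fstar and the estimate fn,
% averaged over the discrete temperature points.
rel_err = abs(fstar - fn) ./ abs(fstar);
global_err = mean(rel_err);
```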
<p float="left">
<img src="images/global.sigmaw_0.101.jpg" width="250" height="200" />
<img src="images/global.sigmaw_0.251.jpg" width="250" height="200"/>
<img src="images/global.sigmaw_0.501.jpg" width="250" height="200" />
</p>
The three plots below depict the relative error versus the number of experiments for three different noise levels. The local (selection) relative error is computed using the following equation, where $C$ is the target response value:
$$ error = \left| { f^{*} - C \over C } \right| $$
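A sketch of the corresponding computation, where $C = 0.5$ is the target and the selected temperature is the point where the estimate `fn` comes closest to the target:

```matlab
C = 0.5;                               % target response value
[~, idx] = min(abs(fn - C));           % temperature index chosen from estimate
sel_err = abs((fstar(idx) - C) / C);   % local (selection) relative error
```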
<p float="left">
<img src="images/selection.sigmaw_0.101.jpg" width="250" height="200" />
<img src="images/selection.sigmaw_0.251.jpg" width="250" height="200"/>
<img src="images/selection.sigmaw_0.501.jpg" width="250" height="200" />
</p>
It is evident from the plots that the Knowledge Gradient (KG) policy performs best in the selection-error plots, which indicates a good estimate of the critical temperature. KG performs well because it focuses on learning specific features or quantities of interest rather than the entire response function. If a response function can be broken down into low-dimensional features, KG can serve as a good global learner.