Bayesian inference: Introduction

On notation

Consider the following function defined as:

f:A\times \Theta\to B \\f:(x;\theta) \mapsto y\quad \text{i.e.} \ y:=f(x;\theta)

The set A is the domain of the function and x is its argument. The set \Theta is the parameter space, and the parameters \theta of the function belong to that set. For example, f(x,y;\alpha,\beta):=\alpha \cos(x^y/\beta)\ \text{ s.t. } (\alpha,\beta)\in \R^2 has two parameters. The following notation is sometimes used to differentiate between the argument and parameter variables of a function:

y:=f(x|\theta)

It reads “f of x given \theta”.
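As a side illustration (not part of the notation discussion itself), the argument/parameter distinction can be mirrored in code; the snippet below is a minimal sketch of the example f(x,y;\alpha,\beta)=\alpha\cos(x^y/\beta), with parameter values chosen arbitrarily:

```python
import numpy as np

# f(x, y; alpha, beta) := alpha * cos(x**y / beta); x, y are arguments, alpha, beta are parameters.
def f(x, y, alpha=2.0, beta=3.0):
    return alpha * np.cos(x**y / beta)

print(f(1.0, 2.0))                       # evaluate at arguments with the default parameters
print(f(1.0, 2.0, alpha=5.0, beta=1.0))  # same arguments, different parameters
```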

In probability theory, the concept of conditional probability uses the same notation, i.e. P(A|B), which reads the probability of the event A given that the event B has occurred; or, the probability of (the event) A conditional on (the event) B. In the case of random variables/vectors, we write P(Y\in R|X\in S).

The definition of probability density/mass functions also uses the aforementioned notation. Consider a random variable/vector X with a probability density function (pdf) having some parameters. Then the pdf is written as:

f_X(x|\theta)

It should be understood from the context that \theta is not a random variable/vector, and hence the above does not imply a conditional pdf. As \theta pertains to a probability distribution (model), another letter is sometimes added to the above notation in order to include the model’s type or some information on the model:

f_X(x|\theta,M)

A conditional pdf is identified with the same notation. Considering two random variables/vectors X and Y, we write:

f_{X|Y=y}(x|y)\equiv f_{X|Y}(x|y)\equiv f(x|y):=\frac{f_{X,Y}(x,y)}{f_Y(y)}\\ \ \\ f_{X|Y}(x|y):\R^m\times\R^n \to \R^+

which is a function of x if y is already fixed. This reads the pdf of X given (the observed and fixed value of) Y. Note that f_{X|Y}(x|y) is just functional notation.
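As a numerical sketch of normalizing a joint pdf, the conditional pdf f_{X|Y}(x|y) can be obtained by dividing the joint pdf by the marginal; the joint below is an assumed standard bivariate normal with correlation \rho=0.6 (not something from the text), for which the closed-form conditional is known and can be used as a check:

```python
import numpy as np

rho = 0.6
def f_XY(x, y):  # joint pdf of a standard bivariate normal with correlation rho
    z = (x**2 - 2*rho*x*y + y**2) / (1 - rho**2)
    return np.exp(-z/2) / (2*np.pi*np.sqrt(1 - rho**2))

y0 = 1.0                                   # the observed, fixed value of Y
xs = np.linspace(-6, 6, 2001)

# f_{X|Y}(x|y0) = f_{X,Y}(x, y0) / f_Y(y0), with f_Y(y0) obtained by integrating out x.
f_Y_y0 = np.trapz(f_XY(xs, y0), xs)
f_X_given_Y = f_XY(xs, y0) / f_Y_y0

# Sanity check: for this joint, X | Y=y0 is N(rho*y0, 1 - rho^2).
closed_form = np.exp(-(xs - rho*y0)**2 / (2*(1 - rho**2))) / np.sqrt(2*np.pi*(1 - rho**2))
print(np.max(np.abs(f_X_given_Y - closed_form)))   # ~0
```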

A conditional pdf (being the result of normalizing a joint pdf that has parameters) itself has parameters; if those parameters are collected in a vector \theta, the conditional pdf can be represented as:

f_{X|Y}(x|y,\theta,M)\equiv f(x|y,\theta,M)

such that it is a function of x for a fixed value y of the random variable Y and fixed values of the parameters \theta. So, the notation “|” is used to indicate both the parameters of a function and the conditioning of random variables involved in a pdf. Whatever precedes “|” is (are) the function argument(s). Henceforward, I use the notation “|” for both purposes; the context will imply the meaning.

Parametric inference

For a continuous random vector X\in R\subset \R^n, a continuous joint pdf f_X(x) can be defined as:

f_X:R\to\R^{+}\\P[X\in A]=\int_A f_X(x|\theta)\, dx \quad \text{for } A\subseteq R

where \theta \in D_\theta is the vector of parameters involved in the definition of the pdf.

The form f_X(x;\theta) can also be interpreted as a function of \theta at a constant x:

L:D_\theta\to \R^{+}\\ L(\theta)\equiv L_x(\theta)\equiv L(\theta|x):=f_X(x|\theta)\tag{1}

This function is called the likelihood function. The notation L(\theta |x) reads the likelihood of \theta given the (observed) data x. Because \theta is not a random variable/vector, the concept of a conditional pdf is irrelevant here. This is a function of the parameter \theta, not of x; hence, it is not a density function. Depending on the variable under consideration, the function f_X(x;\theta) can be interpreted as either the density function or the likelihood function. It should be noted that the parameter(s) belong to a model; this is usually left implicit. For example, the pdf of a normally distributed random variable has two parameters, \mu and \sigma^2.
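A small sketch, assuming a normal model and made-up data, of how the same expression f_X(x|\theta) is read as a likelihood L(\theta|x), i.e. evaluated at fixed data while the parameters vary:

```python
import numpy as np

# Hypothetical observed data, held fixed.
x = np.array([1.2, 0.7, 1.9, 1.1])

def likelihood(mu, sigma, x=x):
    # L(theta | x) := f_X(x | theta), viewed as a function of theta = (mu, sigma) for fixed x.
    return np.prod(np.exp(-(x - mu)**2 / (2*sigma**2)) / np.sqrt(2*np.pi*sigma**2))

print(likelihood(1.0, 0.5))   # likelihood of one candidate (mu, sigma)
print(likelihood(2.0, 0.5))   # same data, different parameters
```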

Usually, the parameters of a probability distribution are unknown; therefore, there is uncertainty about the parameters. The parameters can then be collected in a random vector \Theta. Now both the data and the parameters are random vectors (or variables), and a joint pdf can be imagined for the data and the parameters of the data’s pdf:

f_{X,\Theta}(x,\theta)\equiv f_{X,\Theta}(x,\theta|M)

The notation |M reads “given a/the model M“. This notation is used merely to convey the fact that \theta pertains to a model, and also to express extra information about the model; indeed, it does not indicate a (quantitative) conditioning on M.

Consequently, the conditional pdf of \Theta given the data X=x can be written as:

f_{\Theta|X=x}(\theta|x)\equiv f_{\Theta|X}(\theta|x)\equiv f(\theta|x):=\frac{f_{X,\Theta}(x,\theta)}{f_X(x)}

For a given x, i.e. the data, the above is a function of \theta. Applying the definition of a conditional pdf once more (to the joint pdf in the numerator), we can write:

f_{\Theta|X}(\theta|x)=\frac{f_{X|\Theta}(x|\theta)f_\Theta(\theta)}{f_X(x)}\\ \text{or} \\ \ \tag{2}\\ f_{\Theta|X}(\theta|x,M)=\frac{f_{X|\Theta}(x|\theta,M)f_\Theta(\theta|M)}{f_X(x|M)}

M indicates that a model (a joint distribution of data and parameters) is already assumed; it can be an implicit assumption. Also, any other background information about the model (e.g. the range of the parameters) is integrated into M. The terms are named and explained as follows:

1) The likelihood: f_{X|\Theta}(x|\theta) is called the sampling distribution. This is the distribution of the (observed) data conditional on the parameters. In other words, it tells us the pdf value of the (measured) data given some values of the parameters. This term is the pdf of the data x conditioned on \theta; however, once the data have been observed/measured/given and hence become a fixed variable, the term becomes a function of the parameters \theta. Therefore, this term is referred to as the likelihood of the data. There is a subtle point here. The term is naturally a pdf in x and hence can be written as:

h_X(x|\theta):=f_{X|\Theta}(x|\theta)=\frac{f_{X,\Theta}(x,\theta)}{f_{\Theta}(\theta)}

Therefore, h_X(x;\theta) becomes a likelihood function if considered as a function of \theta, i.e. L(\theta|x):=h_X(x|\theta) (once the data have been observed/measured, it automatically becomes a function of the parameters). This likelihood function is the key function describing both the phenomenon (expressed by the model) and the measurements. Sometimes it is referred to as the likelihood of the parameters; but that is not really a suitable name, because the likelihood uses the same functional form as the conditional pdf of the data. Moreover, the unit of f_{X|\Theta}(x|\theta) is D^{-1}, where D is the unit of the data. This is because the unit of f_{\Theta}(\theta) is \frac{1}{M} and the unit of f_{X,\Theta}(x,\theta) is \frac{1}{D}\frac{1}{M}, where M is the unit of the parameters.

2) The prior: f_\Theta(\theta) is called the prior distribution of \Theta. It expresses the distribution of \Theta before observing the data. It encapsulates the information we have, or assume, about the possible values of the model parameters, regardless of the data.

3) The evidence: f_X(x) is the distribution (marginal pdf) of the data. In mathematical notation: f_X(x|M)= \int_{\text{all values of }\theta} f_{\Theta,X}(\theta,x|M)\,d\theta. The evidence is not a function of the parameters; therefore, it is just a normalization constant for the posterior.

4) The posterior: f_{\Theta|X}(\theta|x) is called the posterior distribution (pdf) of \Theta. This is the pdf of the parameters \Theta (function of the parameters) given the model information and (observed) data.

Keeping in mind that each of the pdf’s in Eq. 2 is a function of the variable preceding “|” and pertains to the pdf of the function’s argument, we usually write the equation as (f is replaced with p):

p(\theta|x,M)=\frac{\textcolor{blue}p(x|\theta,M)\textcolor{LimeGreen}p(\theta|M)}{\textcolor{red}p(x|M)}\\ \ \\ \text{conditional pdf}=\frac{\text{conditional pdf}\times \text{marginal pdf}}{\text{marginal pdf}} \tag{3}

By the aforementioned notation, the term(s) after “|” convey that the pdf is a conditional pdf if the term(s) pertain to (are values of) a random vector/variable. E.g. p(x|\theta,M)\equiv p_{X|\Theta}(x|\theta,M).

For given/measured data (fixed), the numerator of Eq. 3 is a function of the parameters while the denominator is a constant. Therefore, the following representations are defined and usually used in inference:

p(\theta|x,M)=\frac{1}{Z}p(x|\theta,M)p(\theta|M)\\ \ \\ p^*(\theta|x,M):=p(x|\theta,M)p(\theta|M)\tag{4}
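The following sketch evaluates Eq. 4 on a grid for an assumed toy model (iid normal data with unknown mean \theta, unit variance, and a normal prior; the data and prior width are invented for illustration): the unnormalized posterior p^* is the likelihood times the prior, and Z is obtained by marginalizing over \theta.

```python
import numpy as np

x = np.array([0.9, 1.4, 0.3, 1.1])                 # assumed observed data
thetas = np.linspace(-5, 5, 2001)                  # grid over the parameter theta

def lik(theta):        # p(x | theta, M): iid N(theta, 1) likelihood of the fixed data
    return np.prod(np.exp(-(x[:, None] - theta)**2 / 2) / np.sqrt(2*np.pi), axis=0)

def prior(theta):      # p(theta | M): assumed N(0, 2^2) prior
    return np.exp(-theta**2 / (2*2.0**2)) / np.sqrt(2*np.pi*2.0**2)

p_star = lik(thetas) * prior(thetas)               # unnormalized posterior p*(theta | x, M)
Z = np.trapz(p_star, thetas)                       # evidence p(x | M) by marginalization
posterior = p_star / Z                             # normalized posterior p(theta | x, M)
print(thetas[np.argmax(posterior)])                # posterior mode
```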

Application: Data modeling with parametric models

A generative model (also called the forward model) is the theoretical agent (usually a mathematical equation) that simulates/generates/predicts the observable data; the model is identified by its form (equation) and its parameters. A general form of a non-stochastic or deterministic generative model can be written as:

f:\R^m \to\R^n\\ y_s:=f(x|\theta_{f})\equiv f(x;\theta_{f})

where y_s is the simulated data/measurements, x is the index at which the data are measured (e.g. time, a point in space, etc.), and \theta_{f} \in \R^k contains the parameters of the function (model). A stochastic model possesses some randomness; unlike a deterministic model, which produces the same output for the same parameters (and initial conditions), a stochastic model may produce different outputs. Stochastic models predict distribution(s) over the outputs.
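A minimal sketch of a deterministic generative model in code; the specific functional form and parameter values below are hypothetical:

```python
import numpy as np

# Deterministic generative model y_s = f(x; theta_f): a hypothetical damped oscillation
# with parameters theta_f = (a, b, c).
def forward_model(x, theta_f):
    a, b, c = theta_f
    return a * np.exp(-b * x) * np.cos(c * x)

x = np.linspace(0.0, 5.0, 50)        # the index (e.g. time) at which the data are simulated
theta_f = (2.0, 0.5, 3.0)
y_s = forward_model(x, theta_f)      # same parameters -> always the same output
```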

A generative model predicts the data in the absence of measurement noise, i.e. y\approx y_s for measurements y. It means that the model gives the true value, while the measurements are polluted with noise, introducing uncertainty into the measurements. We should distinguish between the true/ideal data and the noisy data/measurements. Noisy data (from measurements) are what we usually have. The model form and its parameters are always determined based on the observed noisy data/measurements. Therefore, we should know how the measurement process influences the true data/values. To this end, a measurement model, also called a noise model, is to be defined. This model contains the uncertainty/noise in the measured data. The measurement model can then be written as y=y_s+\epsilon. The variable \epsilon is actually a random variable/vector introducing the noise. Consequently, the measurement y becomes a random variable/vector as well. Having said that, and using capital or bold letters for random variables/vectors and small or non-bold letters for non-random variables, we write:

Y=y_s+\boldsymbol {\epsilon},\quad Y:\Omega \to \R^n\\ \text{ s.t } y_s:=f(x|\theta_{f})\quad \text{and }\ \boldsymbol \epsilon:=\epsilon(\omega):\Omega\to \R^n\tag{5}

where \Omega is a probability sample space. Note that f(x|\theta_{f}) is a function of x, not a pdf. Y is inherently a stochastic process/random field, i.e. at each x (index), a random variable exists. The noise \epsilon has a distribution and indeed a pdf, p_{\boldsymbol \epsilon}(\epsilon|\theta_{\epsilon}), with its own parameters \theta_{\epsilon}. In such a model, the noise average is set to zero, i.e. E[\boldsymbol\epsilon]=0; therefore, the mean of a measurement at each point is equal to the ideal data. This is simply because the simulating part is not a random vector, hence E[Y]=f(x;\theta_{f})+0.
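A short sketch of simulating measurements according to Eq. 5, assuming a hypothetical linear forward model and zero-mean Gaussian noise whose standard deviation plays the role of \theta_{\epsilon}:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(x, theta_f):           # deterministic part y_s = f(x; theta_f)
    a, b = theta_f
    return a * x + b                     # hypothetical choice of f

x = np.linspace(0.0, 10.0, 20)           # measurement indices
theta_f = (1.5, 0.3)
sigma_eps = 0.4                          # noise parameter theta_eps (standard deviation)

# Measurement model of Eq. 5: Y = y_s + eps, with E[eps] = 0, so E[Y] = f(x; theta_f).
y = forward_model(x, theta_f) + rng.normal(0.0, sigma_eps, size=x.size)
```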

Because y_s is not a random variable/vector, the probability distributions of Y and of the noise (model) are the same up to a shift in the mean. In this kind of data modeling, the noise distribution is assumed (known) and the parameters of the simulating function are sought. This approach is not Bayesian.

A Bayesian approach to data modeling arises when the parameters of the generative model (simulating function) are also considered to be random variables, \Theta_f; this means uncertainty is introduced into the parameters, and hence a probability model is assigned to the parameters. The data model can now be written as:

Y=Y_s+\boldsymbol {\epsilon},\quad Y:\Omega \to \R^n\\ \ \\ \text{ s.t }\quad Y_s:=f(x|\Theta_{f})\equiv f(x;\Theta_{f}) \quad \text{and }\ \boldsymbol \epsilon:=\epsilon(\omega):\Omega\to \R^n\tag{6}

where x indexes Y_s. Now the measurement model is the sum of two random variables/vectors: a function of a random vector/variable (namely Y_s), and the noise. The random variable/vector \Theta_f has a pdf with its own parameters:

p_{\Theta_f}(\theta_f|w,M)\tag{7}

where w is a vector of parameters (the hyperparameters of the prior). The assumption on the probability distribution of the parameters of a generative model enforces a prior on the model’s parameters. This prior does not depend on the data or the noise.
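A minimal sketch of Eq. 7, assuming an independent Gaussian prior over the generative-model parameters; the hyperparameters w (prior means and standard deviations) are chosen arbitrarily:

```python
import numpy as np

# Prior p(theta_f | w, M): assumed independent Gaussians, one per parameter,
# with hyperparameters w = (w_mean, w_std).
w_mean = np.array([0.0, 0.0])
w_std  = np.array([10.0, 10.0])

def prior(theta_f):
    z = (theta_f - w_mean) / w_std
    return np.prod(np.exp(-z**2 / 2) / (np.sqrt(2*np.pi) * w_std))

print(prior(np.array([1.5, 0.3])))   # prior density of one candidate parameter vector
```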

The probability distribution of the noise is usually/initially assumed. What remain unknown are the probability distributions of Y and \Theta_{f}. In this regard, the joint probability distribution, and hence the joint pdf, of these two unknown random variables/vectors can be considered at each index x:

p_{Y,\Theta_{f}}(y,\theta_{f}|w,\theta_{\epsilon},M) \quad \text{or}\tag{8}\\ \ \\ p_{Y,\Theta_{f}}(y,\theta_{f}|x,w,\theta_{\epsilon},M)\quad \text{or}\\ \ \\ p_{Y,\Theta_{f}}(y(x),\theta_{f}|w,\theta_{\epsilon},M)

Using Eq. 3 (Bayes’ theorem), we write:

p(\theta_f|y,M)=\frac{p(y|\theta_f,M)p(\theta_f|M)}{p(y|M)}\tag{9}

where the assumed/preset parameters (in Eq. 8) are integrated into the model information.

The term p(y|\theta_f,M), namely the likelihood of the data once observed, can be written in terms of the noise pdf if the noise random vector/variable is assumed to be independent of the parameters; this is usually the case. Denoting the cumulative distribution function by F_X(.), and considering Eq. 6 at one fixed indexing point x, we can write:

\begin{aligned} Y=Y_s+\boldsymbol \epsilon\quad \text{s.t.}\quad Y_s=g(\Theta),\quad \Theta\equiv\Theta_f, \quad \Theta\text{ and } \boldsymbol \epsilon\text{ are independent} \\ \ \\ F_{Y|\Theta}(y|\Theta=\theta)=P(Y \le y|\Theta=\theta)=P(g(\Theta)+\boldsymbol \epsilon \le y|\Theta=\theta)=\\ \ \\ \frac{P(g(\Theta)+\boldsymbol \epsilon \le y,\Theta=\theta)}{P(\Theta=\theta)}=\frac{P(\boldsymbol \epsilon \le y-g(\theta),\Theta=\theta)}{P(\Theta=\theta)}=\frac{P(\boldsymbol \epsilon \le y-g(\theta))P(\Theta=\theta)}{P(\Theta=\theta)}=\\ \ \\ P(\boldsymbol \epsilon \le y-g(\theta))=F_{\boldsymbol \epsilon}(y-g(\theta)) \Rightarrow\\ \ \\ p_{Y|\Theta}(y|\theta)\equiv p(y|\theta)=p_{\boldsymbol \epsilon}(y-g(\theta))\\ \ \\ \text{Restoring the notation and considering the index } x\text{:} \\ \ \\ p(y|\theta_f,x,w,\theta_{\epsilon},M)=p_{\boldsymbol \epsilon}(y-f(x;\theta_f)) \end{aligned} \tag{10}
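A quick Monte Carlo check of the result of Eq. 10 at one fixed index, under assumed choices of g, \theta, and zero-mean Gaussian noise: the empirical density of Y|\Theta=\theta near a test value y should match the noise pdf evaluated at y-g(\theta).

```python
import numpy as np

rng = np.random.default_rng(1)

def g(theta):                      # hypothetical forward model at one fixed index
    return theta**2

theta = 1.3
sigma_eps = 0.5
samples = g(theta) + rng.normal(0.0, sigma_eps, size=200_000)   # draws of Y | Theta = theta

y = 2.0                                                         # a test value of y
p_eps_shifted = np.exp(-(y - g(theta))**2 / (2*sigma_eps**2)) / np.sqrt(2*np.pi*sigma_eps**2)
hist_density = np.mean(np.abs(samples - y) < 0.01) / 0.02       # empirical density near y
print(p_eps_shifted, hist_density)                              # the two should roughly agree
```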

It is worth noting that if the parameter vector is not a random vector (as in the model of Eq. 5), but just a deterministic parameter denoted by \theta_f, then at a particular point indexed by x, we have Y=g(\theta_f)+\boldsymbol \epsilon. This Y is different from the random vector Y=Y_s+\boldsymbol \epsilon; therefore, we denote it by Y^*. For a deterministic \theta_f, we have p_{Y^*}(y^*)=p_{\boldsymbol \epsilon}(y^*-g(\theta)), because:

\begin{aligned} Y^*=g(\theta)+\boldsymbol \epsilon \Rightarrow F_{Y^*}(y^*)=P(g(\theta)+\boldsymbol \epsilon\le y^*)=P(\boldsymbol \epsilon \le y^*-g(\theta))=F_{\boldsymbol \epsilon} (y^*-g(\theta)) \\ \ \\ \text{implying}\quad p_{Y^*}(y^*)=p_{\boldsymbol \epsilon}(y^*-g(\theta)) \end{aligned} \tag{11}

As \theta becomes a parameter of the function p_{Y^*}(y^*), we can write p_{Y^*}(y^*|\theta)= p_{\boldsymbol \epsilon}(y^*-g(\theta)). Comparing with the case of a probabilistic parameter, we can say that for a given value of \Theta, the random vector Y becomes Y^*, i.e. Y|\Theta=\theta, with the pdf as in Eq. 11. In other words,

\Theta=\theta\quad\Rightarrow\quad Y|(\Theta=\theta):=Y^*\\ \ \\ p_{Y|\Theta}(y|\theta)=p_{Y^*}(y^*)=p_{\boldsymbol \epsilon}(y^*-g(\theta))

Note that for any two random vectors X and Y, the term Y|X=x (or vice versa) is another random vector, say Z:=Y|X=x, with the support \{z=(s,y):s=x \text{ and } y\in \text{support of } Y\}.

In either approach to the parameters, Y or Y^* models the data. Therefore, observed values of the data are substituted for y (or y^*) in order to find the model parameters.

It is common to assume a (multivariate) Gaussian distribution for the noise, \mathscr N(0,\Sigma_{\epsilon \epsilon}). The noise distribution then has a single parameter, the covariance matrix \Sigma_{\epsilon \epsilon}. Frequently, it is further assumed that the noise terms of all measurements (a vector of measurements indexed by x) are iid, meaning that they have the same variance, i.e. identically distributed, and are probabilistically independent of each other, i.e. independent; this makes the covariance matrix diagonal. In this regard (for y or y^*):

p_{\boldsymbol \epsilon}(y-g(\theta))=\frac{1}{(2\pi)^{n/2}|\Sigma_{\epsilon \epsilon}|^{\frac{1}{2}}}\exp\left(-\frac{1}{2}(y-f(x;\theta_{f}))^\text T \Sigma_{\epsilon \epsilon}^{-1}(y-f(x;\theta_{f}))\right)

and with the iid assumption for the noise, its covariance matrix becomes diagonal and isotropic, i.e. \Sigma_{\epsilon\epsilon}=\sigma_{\epsilon}^2 I, having \sigma_{\epsilon}^2 as its diagonal elements:

p_{\boldsymbol \epsilon}(y-g(\theta))=\frac{1}{(2\pi \sigma_{\epsilon}^2)^{n/2} }\exp\left(-\frac{1}{2\sigma_{\epsilon}^2}(y-f(x;\theta_{f}))^\text T (y-f(x;\theta_{f}))\right)
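A sketch of the iid Gaussian likelihood above, written as a log-likelihood for numerical stability (a common practice, not mandated by the text); the forward model is passed in as a function, and the usage example below uses a hypothetical straight-line model:

```python
import numpy as np

# log p_eps(y - f(x; theta_f)) for iid Gaussian noise with Sigma = sigma_eps^2 * I.
def log_likelihood(theta_f, x, y, sigma_eps, forward_model):
    r = y - forward_model(x, theta_f)                 # residuals y - f(x; theta_f)
    n = y.size
    return -0.5 * n * np.log(2*np.pi*sigma_eps**2) - 0.5 * (r @ r) / sigma_eps**2

# Hypothetical usage with a straight-line forward model:
line = lambda x, th: th[0] + th[1] * x
x = np.linspace(0.0, 10.0, 20)
y = line(x, (2.0, 0.8)) + 0.1                         # stand-in "measurements"
print(log_likelihood((2.0, 0.8), x, y, 0.5, line))
```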

One of the simplest models is a straight line; it simulates 1D data, like temperatures measured at different moments. The model is

y_s:=f(x;\theta_f)=\theta_1+\theta_2 x,\quad \theta_f=(\theta_1,\theta_2)\\ \ \\ Y=\theta_1+\theta_2 x+\boldsymbol \epsilon\quad \text{with}\quad \boldsymbol \epsilon\sim\mathscr N(0,\Sigma_{\epsilon \epsilon})
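A minimal end-to-end sketch of fitting this straight-line model on a parameter grid, with simulated data, a known noise level, and a flat prior over the grid ranges; all numbers are invented for illustration, and the posterior is left unnormalized as in Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate noisy straight-line data (the "true" parameters are only used to generate data).
x = np.linspace(0.0, 10.0, 15)
theta_true = (2.0, 0.8)                        # (intercept, slope)
sigma_eps = 0.5
y = theta_true[0] + theta_true[1]*x + rng.normal(0.0, sigma_eps, size=x.size)

t1 = np.linspace(0.0, 4.0, 201)                # grid over the intercept theta_1
t2 = np.linspace(0.0, 2.0, 201)                # grid over the slope theta_2
T1, T2 = np.meshgrid(t1, t2, indexing="ij")

# Unnormalized log-posterior: iid Gaussian log-likelihood plus a flat prior over the grid.
resid = y[None, None, :] - (T1[..., None] + T2[..., None]*x[None, None, :])
log_post = -0.5 * np.sum(resid**2, axis=-1) / sigma_eps**2

post = np.exp(log_post - log_post.max())       # unnormalized posterior p*(theta_f | y)
i, j = np.unravel_index(np.argmax(post), post.shape)
print("posterior mode:", t1[i], t2[j])         # should be close to theta_true
```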

To be reviewed and continued