BAYES’ THEOREM AND STATISTICS

Bayes’ Theorem defines the conditional probability of event A given event B: P(A|B) P(B) = P(B|A) P(A), or equivalently P(A|B) = P(B|A) P(A) / P(B). Here, P(A) and P(B) are the unconditional, or a priori, probabilities of events A and B, respectively. This is a fundamental theorem in probability theory: it allows new information to be used to update the conditional probability of an event. Although the theorem applies to repeatable measurements (as in the frequency-based interpretation of probability), the interpretation of experimental data can also be described by Bayes’ theorem. In that case A is a hypothesis and B is the experimental data. The terms then have the following meanings: (1) P(A|B) is the degree of belief in hypothesis A after the experiment that produced data B; (2) P(A) is the prior probability of A being true; (3) P(B|A) is the ordinary likelihood function, used also by non-Bayesians; and (4) P(B) is the prior probability of obtaining data B.
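As a minimal numerical sketch of this updating rule (the coin-bias hypothesis and all probability values below are illustrative assumptions, not taken from the text), the following Python snippet applies the theorem to revise a prior belief after observing one datum:

    # Hypothetical example: A = "the coin is biased towards heads (P(heads)=0.8)",
    # B = "a single toss came up heads". All numbers are illustrative.
    p_A = 0.5                # prior P(A): initial degree of belief that the coin is biased
    p_B_given_A = 0.8        # likelihood P(B|A): probability of heads if the coin is biased
    p_B_given_notA = 0.5     # likelihood of heads if the coin is fair

    # Prior probability of the data: P(B) = P(B|A) P(A) + P(B|not A) P(not A)
    p_B = p_B_given_A * p_A + p_B_given_notA * (1.0 - p_A)

    # Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
    p_A_given_B = p_B_given_A * p_A / p_B

    print(f"Posterior P(A|B) = {p_A_given_B:.3f}")  # 0.8 * 0.5 / 0.65 = 0.615

After the single head is observed, the degree of belief in the bias hypothesis rises from 0.5 to about 0.615; repeating the update with each new toss accumulates the evidence.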

B4 CENTRAL LIMIT PROPERTY

Let a collection of random variables, each distributed according to a (possibly different) probability distribution, be summed as z = x1 + x2 + ··· + xn; then the central limit theorem states that the random variable z is approximately Gaussian (normally) distributed as n → ∞, provided z has finite expectation and variance. This property justifies the common assumption that noise processes are Gaussian, since such processes can be regarded as the sum of many individual processes with different distributions.
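A short Python sketch of this property follows; the particular component distributions and their parameters are illustrative choices, not from the text, and with only a few summands the agreement with the normal distribution is only approximate:

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)
    trials = 100_000

    # z = x1 + x2 + ... + xn, with the xi drawn from different distributions
    z = (rng.uniform(0.0, 1.0, trials)       # x1: uniform on [0, 1]
         + rng.exponential(1.0, trials)      # x2: exponential, mean 1
         + rng.binomial(10, 0.3, trials)     # x3: binomial(10, 0.3)
         + rng.standard_gamma(2.0, trials))  # x4: gamma, shape 2

    # Standardize z and compare its empirical CDF at a few points with the
    # standard normal CDF; the match improves as more terms are summed.
    s = (z - z.mean()) / z.std()
    normal_cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(f"x = {x:+.1f}: empirical {np.mean(s <= x):.3f}, normal {normal_cdf(x):.3f}")

None of the four component distributions is Gaussian, yet the standardized sum already tracks the standard normal CDF closely, which is the behavior the theorem predicts.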