
"Natural parameter" links here. For the usage of this term in differential geometry, see differential geometry of curves.
In probability and statistics, an exponential family is a set of probability distributions of a certain form, specified below, chosen for mathematical convenience and for several useful algebraic and statistical properties. The concept of exponential families is credited to E. J. G. Pitman,^{[2]} G. Darmois,^{[3]} and B. O. Koopman^{[4]} in 1935–36. The term exponential class is sometimes used in place of "exponential family".^{[5]}
The exponential families include many of the most common distributions, including the normal, exponential, gamma, chi-squared, beta, Dirichlet, Bernoulli, categorical, Poisson, Wishart, inverse Wishart and many others. A number of common distributions are exponential families only when certain parameters are considered fixed and known, e.g. binomial (with fixed number of trials), multinomial (with fixed number of trials), and negative binomial (with fixed number of failures). Examples of common distributions that are not exponential families are Student's t, most mixture distributions, and even the family of uniform distributions with unknown bounds. See the section below on examples for more discussion.
Consideration of exponential-family distributions provides a general framework for selecting a possible alternative parameterisation of the distribution, in terms of natural parameters, and for defining useful sample statistics, called the natural sufficient statistics of the family. For more information, see below.
Definition
The following is a sequence of increasingly general definitions of an exponential family. A casual reader may wish to restrict attention to the first and simplest definition, which corresponds to a single-parameter family of discrete or continuous probability distributions.
Scalar parameter
A single-parameter exponential family is a set of probability distributions whose probability density function (or probability mass function, for the case of a discrete distribution) can be expressed in the form

f_X(x\mid\theta) = h(x) \exp \left (\eta(\theta) \cdot T(x) - A(\theta)\right )
where T(x), h(x), η(θ), and A(θ) are known functions.
An alternative, equivalent form often given is

f_X(x\mid\theta) = h(x) g(\theta) \exp \left ( \eta(\theta) \cdot T(x) \right )
or equivalently

f_X(x\mid\theta) = \exp \left (\eta(\theta) \cdot T(x) - A(\theta) + B(x) \right )
The value θ is called the parameter of the family.
Note that x is often a vector of measurements, in which case T(x) may be a function from the space of possible values of x to the real numbers. More generally, η(θ) and T(x) can each be vectorvalued such that \eta(\theta)'\cdot T(x) is realvalued.
If η(θ) = θ, then the exponential family is said to be in canonical form. By defining a transformed parameter η = η(θ), it is always possible to convert an exponential family to canonical form. The canonical form is non-unique, since η(θ) can be multiplied by any nonzero constant, provided that T(x) is multiplied by that constant's reciprocal, or a constant c can be added to η(θ) and h(x) multiplied by \exp (-c \cdot T(x)) to offset it.
Even when x is a scalar, and there is only a single parameter, the functions η(θ) and T(x) can still be vectors, as described below.
Note also that the function A(θ) or equivalently g(θ) is automatically determined once the other functions have been chosen, and assumes a form that causes the distribution to be normalized (sum or integrate to one over the entire domain). Furthermore, both of these functions can always be written as functions of η, even when η(θ) is not a onetoone function, i.e. two or more different values of θ map to the same value of η(θ), and hence η(θ) cannot be inverted. In such a case, all values of θ mapping to the same η(θ) will also have the same value for A(θ) and g(θ).
Further down the page is the example of a normal distribution with unknown mean and known variance.
Factorization of the variables involved
What is important to note, and what characterizes all exponential family variants, is that the parameter(s) and the observation variable(s) must factorize (can be separated into products each of which involves only one type of variable), either directly or within either part (the base or exponent) of an exponentiation operation. Generally, this means that all of the factors constituting the density or mass function must be of one of the following forms:

f(x), g(\theta), c^{f(x)}, c^{g(\theta)}, [f(x)]^c, [g(\theta)]^c, [f(x)]^{g(\theta)}, [g(\theta)]^{f(x)}, [f(x)]^{h(x)g(\theta)}, \text{ or } [g(\theta)]^{h(x)j(\theta)},
where f and h are arbitrary functions of x; g and j are arbitrary functions of θ; and c is an arbitrary "constant" expression (i.e. an expression not involving x or θ).
There are further restrictions on how many such factors can occur. For example, the two expressions:

[f(x) g(\theta)]^{h(x)j(\theta)}, \qquad [f(x)]^{h(x)j(\theta)} [g(\theta)]^{h(x)j(\theta)},
are the same, i.e. a product of two "allowed" factors. However, when rewritten into the factorized form,

[f(x) g(\theta)]^{h(x)j(\theta)} = [f(x)]^{h(x)j(\theta)} [g(\theta)]^{h(x)j(\theta)} = e^{[h(x)\ln f(x)] j(\theta) + [h(x)\ln g(\theta)] j(\theta)},
it can be seen that it cannot be expressed in the required form. (However, a form of this sort is a member of a curved exponential family, which allows multiple factorized terms in the exponent.)
To see why an expression of the form

[f(x)]^{g(\theta)}
qualifies, note that

[f(x)]^{g(\theta)} = e^{g(\theta) \ln f(x)}
and hence factorizes inside of the exponent. Similarly,

[f(x)]^{h(x)g(\theta)} = e^{h(x)g(\theta)\ln f(x)} = e^{[h(x)\ln f(x)] g(\theta)}
and again factorizes inside of the exponent.
Examples
Normal distribution: Unknown mean, known variance
As a first example, consider a normal distribution with unknown mean μ and known variance σ^2. The probability density function is

f_\sigma(x;\mu) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}.
This is a single-parameter exponential family, as can be seen by setting

\begin{align} h_\sigma(x) &= \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{x^2}{2\sigma^2}} \\ T_\sigma(x) &= \frac{x}{\sigma} \\ A_\sigma(\mu) &= \frac{\mu^2}{2\sigma^2}\\ \eta_\sigma(\mu) &= \frac{\mu}{\sigma}. \end{align}
If σ = 1 this is in canonical form, as then η(μ) = μ.
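This parameterization can be checked numerically. The following sketch (assuming NumPy and SciPy are available) verifies that h_\sigma(x) \exp(\eta_\sigma(\mu) T_\sigma(x) - A_\sigma(\mu)) reproduces the N(\mu, \sigma^2) density:

# A minimal numerical check that the exponential-family factorization above
# reproduces the normal density with known variance.
import numpy as np
from scipy.stats import norm

mu, sigma = 1.3, 2.0
x = np.linspace(-5, 5, 11)

h   = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)  # h_sigma(x)
T   = x / sigma                                                       # T_sigma(x)
eta = mu / sigma                                                      # eta_sigma(mu)
A   = mu**2 / (2 * sigma**2)                                          # A_sigma(mu)

f_exp_family = h * np.exp(eta * T - A)
assert np.allclose(f_exp_family, norm.pdf(x, loc=mu, scale=sigma))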
Normal distribution: Unknown mean and unknown variance
Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then

f(x;\mu,\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}}.
This is an exponential family which can be written in canonical form by defining

\begin{align} \boldsymbol {\eta} &= \left(\frac{\mu}{\sigma^2}, -\frac{1}{2\sigma^2} \right)^{\rm T} \\ h(x) &= \frac{1}{\sqrt{2 \pi}} \\ T(x) &= \left( x, x^2 \right)^{\rm T} \\ A({\boldsymbol \eta}) &= \frac{\mu^2}{2 \sigma^2} + \ln \sigma = -\frac{\eta_1^2}{4\eta_2} + \frac{1}{2}\ln\left|\frac{1}{2\eta_2}\right| \end{align}
Binomial distribution
As an example of a discrete exponential family, consider the binomial distribution with known number of trials n. The probability mass function for this distribution is

f(x)={n \choose x}p^x (1-p)^{n-x}, \quad x \in \{0, 1, 2, \ldots, n\}.
This can equivalently be written as

f(x)={n \choose x}\exp\left(x \log\left(\frac{p}{1-p}\right) + n \log(1-p)\right),
which shows that the binomial distribution is an exponential family, whose natural parameter is

\eta = \log\frac{p}{1-p}.
This function of p is known as the logit.
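The rewriting above, and the logit/inverse-logit relation between p and the natural parameter, can be verified with a short sketch (assuming NumPy and SciPy are available):

# A small check of the exponential-family form of the binomial pmf and of the
# logit / sigmoid mapping between p and the natural parameter eta.
import numpy as np
from scipy.stats import binom
from scipy.special import comb, expit, logit

n, p = 10, 0.3
x = np.arange(n + 1)

eta = logit(p)                      # natural parameter: log(p / (1 - p))
pmf = comb(n, x) * np.exp(x * eta + n * np.log(1 - p))

assert np.allclose(pmf, binom.pmf(x, n, p))
assert np.isclose(expit(eta), p)    # the inverse mapping (sigmoid) recovers p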
Table of distributions
The following table shows how to rewrite a number of common distributions as exponential-family distributions with natural parameters. Refer to the flashcards^{[6]} for the main exponential families.
For a scalar variable and scalar parameter, the form is as follows:

f_X(x\mid\theta) = h(x) \exp\Big(\eta(\theta) T(x) - A(\eta)\Big)
For a scalar variable and vector parameter:

f_X(x\mid\boldsymbol \theta) = h(x) \exp\Big(\boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(x) - A({\boldsymbol \eta})\Big)

f_X(x\mid\boldsymbol \theta) = h(x) g(\boldsymbol \theta) \exp\Big(\boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(x)\Big)
For a vector variable and vector parameter:

f_X(\mathbf{x}\mid\boldsymbol \theta) = h(\mathbf{x}) \exp\Big(\boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(\mathbf{x}) - A({\boldsymbol \eta})\Big)
The above formulas choose the functional form of the exponential family with a log-partition function A({\boldsymbol \eta}). This is done so that the moments of the sufficient statistics can be calculated easily, simply by differentiating this function. Alternative forms involve either parameterizing this function in terms of the normal parameter \boldsymbol\theta instead of the natural parameter, and/or using a factor g(\boldsymbol\eta) outside of the exponential. The relation between the latter and the former is:

A(\boldsymbol\eta) = -\ln g(\boldsymbol\eta)

g(\boldsymbol\eta) = e^{-A(\boldsymbol\eta)}
To convert between the representations involving the two types of parameter, use the formulas below for writing one type of parameter in terms of the other.
Each distribution below is listed with the following entries, in this order:
Distribution
Parameter(s)
Natural parameter(s)
Inverse parameter mapping
Base measure h(x)
Sufficient statistic T(x)
Log-partition A(\boldsymbol\eta)
Log-partition A(\boldsymbol\theta)

Bernoulli distribution

p

\ln\frac{p}{1-p}

\frac{1}{1+e^{-\eta}} = \frac{e^\eta}{1+e^{\eta}}

1

x

\ln (1+e^{\eta})

-\ln (1-p)

binomial distribution
with known number of trials n

p

\ln\frac{p}{1-p}

\frac{1}{1+e^{-\eta}} = \frac{e^\eta}{1+e^{\eta}}

{n \choose x}

x

n \ln (1+e^{\eta})

-n \ln (1-p)

Poisson distribution

λ

\ln\lambda

e^\eta

\frac{1}{x!}

x

e^{\eta}

\lambda

negative binomial distribution
with known number of failures r

p

\ln p

e^\eta

{x+r-1 \choose x}

x

-r \ln (1-e^{\eta})

-r \ln (1-p)

exponential distribution

λ

-\lambda

-\eta

1

x

-\ln(-\eta)

-\ln\lambda

Pareto distribution
with known minimum value x_{m}

α

-\alpha-1

-1-\eta

1

\ln x

-\ln (-1-\eta) + (1+\eta) \ln x_{\mathrm m}

-\ln \alpha - \alpha \ln x_{\mathrm m}

Weibull distribution
with known shape k

λ

-\frac{1}{\lambda^k}

(-\eta)^{-\frac{1}{k}}

x^{k1}

x^k

-\ln(-\eta) - \ln k

k\ln\lambda - \ln k

Laplace distribution
with known mean μ

b

-\frac{1}{b}

-\frac{1}{\eta}

1

|x-\mu|

\ln\left(-\frac{2}{\eta}\right)

\ln 2b

chisquared distribution

ν

\frac{\nu}{2}-1

2(\eta+1)

e^{-\frac{x}{2}}

\ln x

\ln \Gamma(\eta+1)+(\eta+1)\ln 2

\ln \Gamma\left(\frac{\nu}{2}\right)+\frac{\nu}{2}\ln 2

normal distribution
known variance

μ

\frac{\mu}{\sigma}

\sigma\eta

\frac{e^{-\frac{x^2}{2\sigma^2}}}{\sqrt{2\pi}\sigma}

\frac{x}{\sigma}

\frac{\eta^2}{2}

\frac{\mu^2}{2\sigma^2}

normal distribution

μ,σ^{2}

\begin{bmatrix} \dfrac{\mu}{\sigma^2} \\[10pt] -\dfrac{1}{2\sigma^2} \end{bmatrix}

\begin{bmatrix} -\dfrac{\eta_1}{2\eta_2} \\[15pt] -\dfrac{1}{2\eta_2} \end{bmatrix}

\frac{1}{\sqrt{2\pi}}

\begin{bmatrix} x \\ x^2 \end{bmatrix}

-\frac{\eta_1^2}{4\eta_2} - \frac12\ln(-2\eta_2)

\frac{\mu^2}{2\sigma^2} + \ln \sigma

lognormal distribution

μ,σ^{2}

\begin{bmatrix} \dfrac{\mu}{\sigma^2} \\[10pt] -\dfrac{1}{2\sigma^2} \end{bmatrix}

\begin{bmatrix} -\dfrac{\eta_1}{2\eta_2} \\[15pt] -\dfrac{1}{2\eta_2} \end{bmatrix}

\frac{1}{\sqrt{2\pi}x}

\begin{bmatrix} \ln x \\ (\ln x)^2 \end{bmatrix}

-\frac{\eta_1^2}{4\eta_2} - \frac12\ln(-2\eta_2)

\frac{\mu^2}{2\sigma^2} + \ln \sigma

inverse Gaussian distribution

μ,λ

\begin{bmatrix} -\dfrac{\lambda}{2\mu^2} \\[15pt] -\dfrac{\lambda}{2} \end{bmatrix}

\begin{bmatrix} \sqrt{\dfrac{\eta_2}{\eta_1}} \\[15pt] -2\eta_2 \end{bmatrix}

\frac{1}{\sqrt{2\pi}x^{\frac{3}{2}}}

\begin{bmatrix} x \\[5pt] \dfrac{1}{x} \end{bmatrix}

-2\sqrt{\eta_1\eta_2} - \frac12\ln(-2\eta_2)

-\frac{\lambda}{\mu} - \frac12\ln\lambda

gamma distribution

α,β

\begin{bmatrix} \alpha-1 \\ -\beta \end{bmatrix}

\begin{bmatrix} \eta_1+1 \\ -\eta_2 \end{bmatrix}

1

\begin{bmatrix} \ln x \\ x \end{bmatrix}

\ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2)

\ln \Gamma(\alpha)-\alpha\ln\beta

k, θ

\begin{bmatrix} k-1 \\[5pt] -\dfrac{1}{\theta} \end{bmatrix}

\begin{bmatrix} \eta_1+1 \\[5pt] -\dfrac{1}{\eta_2} \end{bmatrix}

\ln \Gamma(k)+k\ln\theta

inverse gamma distribution

α,β

\begin{bmatrix} -\alpha-1 \\ -\beta \end{bmatrix}

\begin{bmatrix} -\eta_1-1 \\ -\eta_2 \end{bmatrix}

1

\begin{bmatrix} \ln x \\ \frac{1}{x} \end{bmatrix}

\ln \Gamma(-\eta_1-1)-(-\eta_1-1)\ln(-\eta_2)

\ln \Gamma(\alpha)-\alpha\ln\beta

scaled inverse chisquared distribution

ν,σ^{2}

\begin{bmatrix} -\dfrac{\nu}{2}-1 \\[10pt] -\dfrac{\nu\sigma^2}{2} \end{bmatrix}

\begin{bmatrix} -2(\eta_1+1) \\[10pt] \dfrac{\eta_2}{\eta_1+1} \end{bmatrix}

1

\begin{bmatrix} \ln x \\ \frac{1}{x} \end{bmatrix}

\ln \Gamma(-\eta_1-1)-(-\eta_1-1)\ln(-\eta_2)

\ln \Gamma\left(\frac{\nu}{2}\right)-\frac{\nu}{2}\ln\frac{\nu\sigma^2}{2}

beta distribution

α,β

\begin{bmatrix} \alpha - 1 \\ \beta - 1 \end{bmatrix}

\begin{bmatrix} \eta_1 + 1 \\ \eta_2 + 1 \end{bmatrix}

1

\begin{bmatrix} \ln x \\ \ln (1-x) \end{bmatrix}

\ln \Gamma(\eta_1+1) + \ln \Gamma(\eta_2+1) - \ln \Gamma(\eta_1+\eta_2+2)

\ln \Gamma(\alpha) + \ln \Gamma(\beta) - \ln \Gamma(\alpha+\beta)

multivariate normal distribution

μ,Σ

\begin{bmatrix} \boldsymbol\Sigma^{-1}\boldsymbol\mu \\[5pt] -\frac12\boldsymbol\Sigma^{-1} \end{bmatrix}

\begin{bmatrix} -\frac12\boldsymbol\eta_2^{-1}\boldsymbol\eta_1 \\[5pt] -\frac12\boldsymbol\eta_2^{-1} \end{bmatrix}

(2\pi)^{-\frac{k}{2}}

\begin{bmatrix} \mathbf{x} \\[5pt] \mathbf{x}\mathbf{x}^\mathrm{T} \end{bmatrix}

-\frac{1}{4}\boldsymbol\eta_1^{\rm T}\boldsymbol\eta_2^{-1}\boldsymbol\eta_1 - \frac12\ln\left|-2\boldsymbol\eta_2\right|

\frac12\boldsymbol\mu^{\rm T}\boldsymbol\Sigma^{-1}\boldsymbol\mu + \frac12 \ln \left|\boldsymbol\Sigma\right|

categorical distribution (variant 1)

p_{1},...,p_{k}
where \textstyle\sum_{i=1}^k p_i=1

\begin{bmatrix} \ln p_1 \\ \vdots \\ \ln p_k \end{bmatrix}

\begin{bmatrix} e^{\eta_1} \\ \vdots \\ e^{\eta_k} \end{bmatrix}
where \textstyle\sum_{i=1}^k e^{\eta_i}=1

1

\begin{bmatrix} [x=1] \\ \vdots \\ [x=k] \end{bmatrix}

0

0

categorical distribution (variant 2)

p_{1},...,p_{k}
where \textstyle\sum_{i=1}^k p_i=1

\begin{bmatrix} \ln p_1+C \\ \vdots \\ \ln p_k+C \end{bmatrix}

\begin{bmatrix} \dfrac{1}{C}e^{\eta_1} \\ \vdots \\ \dfrac{1}{C}e^{\eta_k} \end{bmatrix} =
\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix}
where \textstyle\sum_{i=1}^k e^{\eta_i}=C

1

\begin{bmatrix} [x=1] \\ \vdots \\ [x=k] \end{bmatrix}

0

0

categorical distribution (variant 3)

p_{1},...,p_{k}
where p_k = 1 - \textstyle\sum_{i=1}^{k-1} p_i

\begin{bmatrix} \ln \dfrac{p_1}{p_k} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{p_k} \\[15pt] 0 \end{bmatrix} =
\begin{bmatrix} \ln \dfrac{p_1}{1-\sum_{i=1}^{k-1}p_i} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{1-\sum_{i=1}^{k-1}p_i} \\[15pt] 0 \end{bmatrix}

\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix} =
\begin{bmatrix} \dfrac{e^{\eta_1}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_{k-1}}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[15pt] \dfrac{1}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \end{bmatrix}

1

\begin{bmatrix} [x=1] \\ \vdots \\ [x=k] \end{bmatrix}

\ln \left(\sum_{i=1}^{k} e^{\eta_i}\right) = \ln \left(1+\sum_{i=1}^{k-1} e^{\eta_i}\right)

-\ln p_k = -\ln \left(1 - \sum_{i=1}^{k-1} p_i\right)

multinomial distribution (variant 1)
with known number of trials n

p_{1},...,p_{k}
where \textstyle\sum_{i=1}^k p_i=1

\begin{bmatrix} \ln p_1 \\ \vdots \\ \ln p_k \end{bmatrix}

\begin{bmatrix} e^{\eta_1} \\ \vdots \\ e^{\eta_k} \end{bmatrix}
where \textstyle\sum_{i=1}^k e^{\eta_i}=1

\frac{n!}{\prod_{i=1}^{k} x_i!}

\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}

0

0

multinomial distribution (variant 2)
with known number of trials n

p_{1},...,p_{k}
where \textstyle\sum_{i=1}^k p_i=1

\begin{bmatrix} \ln p_1+C \\ \vdots \\ \ln p_k+C \end{bmatrix}

\begin{bmatrix} \dfrac{1}{C}e^{\eta_1} \\ \vdots \\ \dfrac{1}{C}e^{\eta_k} \end{bmatrix} =
\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix}
where \textstyle\sum_{i=1}^k e^{\eta_i}=C

\frac{n!}{\prod_{i=1}^{k} x_i!}

\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}

0

0

multinomial distribution (variant 3)
with known number of trials n

p_{1},...,p_{k}
where p_k = 1 - \textstyle\sum_{i=1}^{k-1} p_i

\begin{bmatrix} \ln \dfrac{p_1}{p_k} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{p_k} \\[15pt] 0 \end{bmatrix} =
\begin{bmatrix} \ln \dfrac{p_1}{1-\sum_{i=1}^{k-1}p_i} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{1-\sum_{i=1}^{k-1}p_i} \\[15pt] 0 \end{bmatrix}

\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix} =
\begin{bmatrix} \dfrac{e^{\eta_1}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_{k-1}}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[15pt] \dfrac{1}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \end{bmatrix}

\frac{n!}{\prod_{i=1}^{k} x_i!}

\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}

n\ln \left(\sum_{i=1}^{k} e^{\eta_i}\right) = n\ln \left(1+\sum_{i=1}^{k-1} e^{\eta_i}\right)

-n\ln p_k = -n\ln \left(1 - \sum_{i=1}^{k-1} p_i\right)

Dirichlet distribution

α_{1},...,α_{k}

\begin{bmatrix} \alpha_1-1 \\ \vdots \\ \alpha_k-1 \end{bmatrix}

\begin{bmatrix} \eta_1+1 \\ \vdots \\ \eta_k+1 \end{bmatrix}

1

\begin{bmatrix} \ln x_1 \\ \vdots \\ \ln x_k \end{bmatrix}

\sum_{i=1}^k \ln \Gamma(\eta_i+1) - \ln \Gamma\left(\sum_{i=1}^k\Big(\eta_i+1\Big)\right)

\sum_{i=1}^k \ln \Gamma(\alpha_i) - \ln \Gamma\left(\sum_{i=1}^k\alpha_i\right)

Wishart distribution

V,n

\begin{bmatrix} -\frac12\mathbf{V}^{-1} \\[5pt] \dfrac{n-p-1}{2} \end{bmatrix}

\begin{bmatrix} -\frac12{\boldsymbol\eta_1}^{-1} \\[5pt] 2\eta_2+p+1 \end{bmatrix}

1

\begin{bmatrix} \mathbf{X} \\ \ln\left|\mathbf{X}\right| \end{bmatrix}

-\left(\eta_2+\frac{p+1}{2}\right)\ln\left|-\boldsymbol\eta_1\right|
+ \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right) =
-\frac{n}{2}\ln\left|-\boldsymbol\eta_1\right| + \ln\Gamma_p\left(\frac{n}{2}\right) =
\left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln\left|\mathbf{V}\right|)
+ \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right)

Three variants with different parameterizations are given, to facilitate computing moments of the sufficient statistics.

\frac{n}{2}(p\ln 2 + \ln\left|\mathbf{V}\right|) + \ln\Gamma_p\left(\frac{n}{2}\right)

NOTE: Uses the fact that {\rm tr}(\mathbf{A}^{\rm T}\mathbf{B}) = \operatorname{vec}(\mathbf{A}) \cdot \operatorname{vec}(\mathbf{B}), i.e. the trace of a matrix product is much like a dot product. The matrix parameters are assumed to be vectorized (laid out in a vector) when inserted into the exponential form. Also, V and X are symmetric, so e.g. \mathbf{V}^{\rm T} = \mathbf{V}.

inverse Wishart distribution

Ψ,m

\begin{bmatrix} -\frac12\boldsymbol\Psi \\[5pt] -\dfrac{m+p+1}{2} \end{bmatrix}

\begin{bmatrix} -2\boldsymbol\eta_1 \\[5pt] -(2\eta_2+p+1) \end{bmatrix}

1

\begin{bmatrix} \mathbf{X}^{-1} \\ \ln\left|\mathbf{X}\right| \end{bmatrix}

\left(\eta_2 + \frac{p + 1}{2}\right)\ln\left|-\boldsymbol\eta_1\right|
+ \ln\Gamma_p\left(-\Big(\eta_2 + \frac{p + 1}{2}\Big)\right) =
-\frac{m}{2}\ln\left|-\boldsymbol\eta_1\right| + \ln\Gamma_p\left(\frac{m}{2}\right) =
-\left(\eta_2 + \frac{p + 1}{2}\right)(p\ln 2 - \ln\left|\boldsymbol\Psi\right|)
+ \ln\Gamma_p\left(-\Big(\eta_2 + \frac{p + 1}{2}\Big)\right)

\frac{m}{2}(p\ln 2 - \ln\left|\boldsymbol\Psi\right|) + \ln\Gamma_p\left(\frac{m}{2}\right)

normalgamma distribution

α,β,μ,λ

\begin{bmatrix} \alpha-\frac12 \\ -\beta-\dfrac{\lambda\mu^2}{2} \\ \lambda\mu \\ -\dfrac{\lambda}{2}\end{bmatrix}

\begin{bmatrix} \eta_1+\frac12 \\ -\eta_2 + \dfrac{\eta_3^2}{4\eta_4} \\ -\dfrac{\eta_3}{2\eta_4} \\ -2\eta_4 \end{bmatrix}

\dfrac{1}{\sqrt{2\pi}}

\begin{bmatrix} \ln \tau \\ \tau \\ \tau x \\ \tau x^2 \end{bmatrix}

\ln \Gamma\left(\eta_1+\frac12\right) - \frac12\ln\left(-2\eta_4\right) - \left(\eta_1+\frac12\right)\ln\left(-\eta_2 + \dfrac{\eta_3^2}{4\eta_4}\right)

\ln \Gamma\left(\alpha\right)-\alpha\ln\beta-\frac12\ln\lambda

The three variants of the categorical distribution and multinomial distribution are due to the fact that the parameters p_i are constrained, such that

\sum_{i=1}^{k} p_i = 1.
Thus, there are only k−1 independent parameters.

Variant 1 uses k natural parameters with a simple relation between the standard and natural parameters; however, only k−1 of the natural parameters are independent, and the set of k natural parameters is non-identifiable. The constraint on the usual parameters translates to a similar constraint on the natural parameters.

Variant 2 demonstrates the fact that the entire set of natural parameters is non-identifiable: adding any constant value to the natural parameters has no effect on the resulting distribution. However, by using the constraint on the natural parameters, the formula for the normal parameters in terms of the natural parameters can be written in a way that is independent of the constant that is added.

Variant 3 shows how to make the parameters identifiable in a convenient way by setting C = -\ln p_k. This effectively "pivots" around p_{k} and causes the last natural parameter to have the constant value of 0. All the remaining formulas are written in a way that does not access p_{k}, so that effectively the model has only k−1 parameters, both of the usual and natural kind.
Note also that variants 1 and 2 are not actually standard exponential families at all. Rather they are curved exponential families, i.e. there are k−1 independent parameters embedded in a k-dimensional parameter space. Many of the standard results for exponential families do not apply to curved exponential families. An example is the log-partition function A(x), which has the value of 0 in the curved cases. In standard exponential families, the derivatives of this function correspond to the moments (more technically, the cumulants) of the sufficient statistics, e.g. the mean and variance. However, a value of 0 suggests that the mean and variance of all the sufficient statistics are uniformly 0, whereas in fact the mean of the ith sufficient statistic should be p_{i}. (This does emerge correctly when using the form of A(x) given in variant 3.)
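The variant-3 mapping and its softmax inverse are easy to check numerically; a small sketch (assuming NumPy is available, with an arbitrary example probability vector):

# Variant 3 of the categorical distribution: pivoting on p_k gives natural
# parameters ln(p_i / p_k), with the last component identically 0, and the
# inverse mapping is the softmax function.
import numpy as np

p = np.array([0.2, 0.5, 0.3])                  # k = 3 category probabilities
eta = np.log(p / p[-1])                        # natural parameters; eta[-1] == 0

p_recovered = np.exp(eta) / np.exp(eta).sum()  # softmax inverse mapping
assert np.allclose(p_recovered, p)
assert eta[-1] == 0.0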
Moments and cumulants of the sufficient statistic
Normalization of the distribution
We start with the normalization of the probability distribution. In general, an arbitrary function f(x) that serves as the kernel of a probability distribution (the part encoding all dependence on x) can be made into a proper distribution by normalizing: i.e.

p(x) = \frac{1}{Z} f(x)
where

Z = \int_x f(x) dx.
The factor Z is sometimes termed the normalizer or partition function, based on an analogy to statistical physics.
In the case of an exponential family where

p(x; \boldsymbol\eta) = g(\boldsymbol\eta) h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)},
the kernel is

K(x) = h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)}
and the partition function is

Z = \int_x h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)} dx.
Since the distribution must be normalized, we have

1 = \int_x g(\boldsymbol\eta) h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)} dx = g(\boldsymbol\eta) \int_x h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)} dx = g(\boldsymbol\eta) Z.
In other words,

g(\boldsymbol\eta) = \frac{1}{Z}
or equivalently

A(\boldsymbol\eta) = -\ln g(\boldsymbol\eta) = \ln Z.
This justifies calling A the log-normalizer or log-partition function.
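For a quick illustration (a sketch assuming NumPy is available), take the Bernoulli family from the table above, where h(x) = 1, T(x) = x and the support is {0, 1}, so that Z = 1 + e^\eta and A(\eta) = \ln(1 + e^\eta):

# Numerical check of A(eta) = ln Z and g(eta) = 1/Z for the Bernoulli family.
import numpy as np

p = 0.3
eta = np.log(p / (1 - p))                    # natural parameter
Z = sum(np.exp(eta * x) for x in (0, 1))     # partition function, with h(x) = 1, T(x) = x
A = np.log(1 + np.exp(eta))                  # log-partition from the table

assert np.isclose(np.log(Z), A)
assert np.isclose(np.exp(-A), 1 - p)         # g(eta) = 1/Z, which here equals 1 - p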
Moment generating function of the sufficient statistic
Now, the moment generating function of T(x) is

M_T(u) \equiv E[e^{u^{\rm T} T(x)}\mid\eta] = \int_x h(x) e^{(\eta+u)^{\rm T} T(x)-A(\eta)} dx = e^{A(\eta + u)-A(\eta)}
showing that

K(u\mid\eta) = A(\eta+u) - A(\eta)
is the cumulant generating function for T.
An important subclass of the exponential family, the natural exponential family, has a similar form for the moment generating function for the distribution of x.
Differential identities for cumulants
In particular, using the properties of the cumulant generating function,

E(T_{j}) = \frac{ \partial A(\eta) }{ \partial \eta_{j} }
and

\mathrm{cov}\left (T_i,T_j \right) = \frac{ \partial^2 A(\eta) }{ \partial \eta_{i} \partial \eta_{j} }.
The first two raw moments and all mixed second moments can be recovered from these two identities. Higher order moments and cumulants are obtained by higher derivatives. This technique is often useful when T is a complicated function of the data, whose moments are difficult to calculate by integration.
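For instance, for the Poisson family A(η) = e^η, so both the mean and the variance of the sufficient statistic T(x) = x equal e^η = λ. A minimal numerical sketch (assuming NumPy is available), using finite differences in place of the analytic derivatives:

# Differentiating the log-partition function of the Poisson family numerically:
# the first derivative gives E[x] and the second gives Var(x), both equal to lambda.
import numpy as np

lam = 2.5
eta = np.log(lam)
A = lambda e: np.exp(e)                              # log-partition function

h = 1e-5
dA  = (A(eta + h) - A(eta - h)) / (2 * h)            # first derivative  -> E[x]
d2A = (A(eta + h) - 2 * A(eta) + A(eta - h)) / h**2  # second derivative -> Var(x)

assert np.isclose(dA, lam, rtol=1e-4)
assert np.isclose(d2A, lam, rtol=1e-3)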
Another way to see this that does not rely on the theory of cumulants is to begin from the fact that the distribution of an exponential family must be normalized, and differentiate. We illustrate using the simple case of a onedimensional parameter, but an analogous derivation holds more generally.
In the onedimensional case, we have

p(x) = g(\eta) h(x) e^{\eta T(x)} .
This must be normalized, so

1 = \int_x p(x) dx = \int_x g(\eta) h(x) e^{\eta T(x)} dx = g(\eta) \int_x h(x) e^{\eta T(x)} dx .
Take the derivative of both sides with respect to η:

\begin{align} 0 &= g(\eta) \frac{d}{d\eta} \int_x h(x) e^{\eta T(x)} dx + g'(\eta)\int_x h(x) e^{\eta T(x)} dx \\ &= g(\eta) \int_x h(x) \left(\frac{d}{d\eta} e^{\eta T(x)}\right) dx + g'(\eta)\int_x h(x) e^{\eta T(x)} dx \\ &= g(\eta) \int_x h(x) e^{\eta T(x)} T(x) dx + g'(\eta)\int_x h(x) e^{\eta T(x)} dx \\ &= \int_x T(x) g(\eta) h(x) e^{\eta T(x)} dx + \frac{g'(\eta)}{g(\eta)}\int_x g(\eta) h(x) e^{\eta T(x)} dx \\ &= \int_x T(x) p(x) dx + \frac{g'(\eta)}{g(\eta)}\int_x p(x) dx \\ &= \mathbb{E}[T(x)] + \frac{g'(\eta)}{g(\eta)} \\ &= \mathbb{E}[T(x)] + \frac{d}{d\eta} \ln g(\eta) \end{align}
Therefore,

\mathbb{E}[T(x)] = -\frac{d}{d\eta} \ln g(\eta) = \frac{d}{d\eta} A(\eta).
Example 1
As an introductory example, consider the gamma distribution, whose density is defined by

p(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1}e^{-\beta x}.
Referring to the above table, we can see that the natural parameter is given by

\eta_1 = \alpha-1,

\eta_2 = -\beta,
the reverse substitutions are

\alpha = \eta_1+1,

\beta = -\eta_2,
the sufficient statistics are (\ln x, x), and the logpartition function is

A(\eta_1,\eta_2) = \ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2).
We can find the mean of the sufficient statistics as follows. First, for η_{1}:

\begin{align} \mathbb{E}[\ln x] &= \frac{ \partial A(\eta_1,\eta_2) }{ \partial \eta_1 } = \frac{ \partial }{ \partial \eta_1 } \left(\ln\Gamma(\eta_1+1) - (\eta_1+1) \ln(-\eta_2)\right) \\ &= \psi(\eta_1+1) - \ln(-\eta_2) \\ &= \psi(\alpha) - \ln \beta, \end{align}
where \psi(x) is the digamma function (the derivative of the log gamma function), and we used the reverse substitutions in the last step.
Now, for η_{2}:

\begin{align} \mathbb{E}[x] &= \frac{ \partial A(\eta_1,\eta_2) }{ \partial \eta_2 } = \frac{ \partial }{ \partial \eta_2 } \left(\ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2)\right) \\ &= -(\eta_1+1)\frac{1}{-\eta_2}(-1) = -\frac{\eta_1+1}{\eta_2} \\ &= \frac{\alpha}{\beta}, \end{align}
again making the reverse substitution in the last step.
To compute the variance of x, we just differentiate again:

\begin{align} \operatorname{Var}(x) &= \frac{\partial^2 A\left(\eta_1,\eta_2 \right)}{\partial \eta_2^2} = \frac{\partial}{\partial \eta_2} \left(-\frac{\eta_1+1}{\eta_2}\right) \\ &= \frac{\eta_1+1}{\eta_2^2} \\ &= \frac{\alpha}{\beta^2}. \end{align}
All of these calculations can be done using integration, making use of various properties of the gamma function, but this requires significantly more work.
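A Monte Carlo spot-check of these results is also straightforward; the following sketch (assuming NumPy and SciPy are available) compares sample averages against \psi(\alpha) - \ln\beta and \alpha/\beta^2:

# Monte Carlo verification of E[ln x] = psi(alpha) - ln(beta) and Var(x) = alpha / beta^2
# for the gamma distribution with shape alpha and rate beta.
import numpy as np
from scipy.special import digamma
from scipy.stats import gamma

alpha, beta = 3.0, 2.0
rng = np.random.default_rng(0)
x = gamma(a=alpha, scale=1 / beta).rvs(size=1_000_000, random_state=rng)

assert np.isclose(np.log(x).mean(), digamma(alpha) - np.log(beta), atol=1e-2)
assert np.isclose(x.var(), alpha / beta**2, rtol=1e-2)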
Example 2
As another example, consider a real-valued random variable X with density

p_\theta (x) = \frac{ \theta e^{-x} }{\left(1 + e^{-x} \right)^{\theta + 1} }
indexed by shape parameter \theta \in (0,\infty) (this is called the skewlogistic distribution). The density can be rewritten as

\frac{ e^{-x} } { 1 + e^{-x} } \exp\left( -\theta \log\left(1 + e^{-x} \right) + \log(\theta)\right)
Notice this is an exponential family with natural parameter

\eta = -\theta,
sufficient statistic

T = \log\left (1 + e^{-x} \right),
and logpartition function

A(\eta) = -\log(\theta) = -\log(-\eta)
So using the first identity,

E(\log(1 + e^{-X})) = E(T) = \frac{ \partial A(\eta) }{ \partial \eta } = \frac{ \partial }{ \partial \eta } \left[-\log(-\eta)\right] = \frac{-1}{\eta} = \frac{1}{\theta},
and using the second identity

\mathrm{var}(\log\left(1 + e^{-X} \right)) = \frac{ \partial^2 A(\eta) }{ \partial \eta^2 } = \frac{ \partial }{ \partial \eta } \left[\frac{-1}{\eta}\right] = \frac{1}{(-\eta)^2} = \frac{1}{\theta^2}.
This example illustrates a case where using this method is very simple, but the direct calculation would be nearly impossible.
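The identities can nonetheless be checked by simulation: the CDF of this distribution is (1 + e^{-x})^{-\theta}, so inverse-CDF sampling is straightforward. A sketch (assuming NumPy is available):

# Monte Carlo check for the skew-logistic distribution: sample by inverting the CDF
# (1 + e^{-x})^{-theta}, then verify that E[log(1 + e^{-X})] = 1/theta and
# Var(log(1 + e^{-X})) = 1/theta^2.
import numpy as np

theta = 3.0
rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)
x = -np.log(u**(-1 / theta) - 1)             # inverse CDF applied to uniform draws

t = np.log1p(np.exp(-x))                     # sufficient statistic T(x)
assert np.isclose(t.mean(), 1 / theta, atol=1e-3)
assert np.isclose(t.var(), 1 / theta**2, rtol=2e-2)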
Example 3
The final example is one where integration would be extremely difficult. This is the case of the Wishart distribution, which is defined over matrices. Even taking derivatives is a bit tricky, as it involves matrix calculus, but the respective identities are listed in that article.
From the above table, we can see that the natural parameter is given by

\boldsymbol\eta_1 = -\frac12\mathbf{V}^{-1},

\eta_2 = \frac{n-p-1}{2},
the reverse substitutions are

\mathbf{V} = -\frac12{\boldsymbol\eta_1}^{-1},

n = 2\eta_2+p+1,
and the sufficient statistics are (\mathbf{X}, \ln\left|\mathbf{X}\right|).
The logpartition function is written in various forms in the table, to facilitate differentiation and backsubstitution. We use the following forms:

A(\boldsymbol\eta_1, n) = -\frac{n}{2}\ln\left|-\boldsymbol\eta_1\right| + \ln\Gamma_p\left(\frac{n}{2}\right),

A(\mathbf{V},\eta_2) = \left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln\left|\mathbf{V}\right|) + \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right).

Expectation of X (associated with η_{1})
To differentiate with respect to η_{1}, we need the following matrix calculus identity:

\frac{\partial \ln\left|a\mathbf{X}\right|}{\partial \mathbf{X}} = (\mathbf{X}^{-1})^{\rm T}
Then:

\begin{align} \mathbb{E}[\mathbf{X}] &= \frac{ \partial A\left(\boldsymbol\eta_1,\cdots \right) }{ \partial \boldsymbol\eta_1 } \\ &= \frac{ \partial }{ \partial \boldsymbol\eta_1 } \left[-\frac{n}{2}\ln\left|-\boldsymbol\eta_1\right| + \ln\Gamma_p\left(\frac{n}{2}\right) \right] \\ &= -\frac{n}{2}(\boldsymbol\eta_1^{-1})^{\rm T} \\ &= -\frac{n}{2}(-2\mathbf{V})^{\rm T} \\ &= n\mathbf{V}^{\rm T} \\ &= n\mathbf{V} \end{align}
The last line uses the fact that V is symmetric, and therefore it is the same when transposed.

Expectation of ln X (associated with η_{2})
Now, for η_{2}, we first need to expand the part of the logpartition function that involves the multivariate gamma function:

\ln \Gamma_p(a)= \ln \left(\pi^{\frac{p(p-1)}{4}}\prod_{j=1}^p \Gamma\left(a+\frac{1-j}{2}\right)\right) = \frac{p(p-1)}{4} \ln \pi + \sum_{j=1}^p \ln \Gamma\left[ a+\frac{1-j}{2}\right]
We also need the digamma function:

\psi(x) = \frac{d}{dx} \ln \Gamma(x).
Then:

\begin{align} \mathbb{E}[\ln\left|\mathbf{X}\right|] &= \frac{\partial A\left (\cdots,\eta_2 \right)}{\partial \eta_2} \\ &= \frac{\partial}{\partial \eta_2} \left[\left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln\left|\mathbf{V}\right|) + \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right) \right] \\ &= \frac{\partial}{\partial \eta_2} \left[ \left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln\left|\mathbf{V}\right|) + \frac{p(p-1)}{4} \ln \pi + \sum_{j=1}^p \ln \Gamma\left(\eta_2+\frac{p+1}{2}+\frac{1-j}{2}\right) \right] \\ &= p\ln 2 + \ln\left|\mathbf{V}\right| + \sum_{j=1}^p \psi\left(\eta_2+\frac{p+1}{2}+\frac{1-j}{2}\right) \\ &= p\ln 2 + \ln\left|\mathbf{V}\right| + \sum_{j=1}^p \psi\left(\frac{n-p-1}{2}+\frac{p+1}{2}+\frac{1-j}{2}\right) \\ &= p\ln 2 + \ln\left|\mathbf{V}\right| + \sum_{j=1}^p \psi\left(\frac{n+1-j}{2}\right) \end{align}
This latter formula is listed in the Wishart distribution article. Both of these expectations are needed when deriving the variational Bayes update equations in a Bayes network involving a Wishart distribution (which is the conjugate prior of the multivariate normal distribution).
Computing these formulas using integration would be much more difficult. The first one, for example, would require matrix integration.
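Both expectations can nevertheless be spot-checked by simulation. The following sketch (assuming NumPy and SciPy are available, and using scipy.stats.wishart purely for sampling) compares Monte Carlo averages against n\mathbf{V} and the formula above:

# Monte Carlo check of E[X] = n V and
# E[ln|X|] = sum_j psi((n + 1 - j)/2) + p ln 2 + ln|V| for the Wishart distribution.
import numpy as np
from scipy.stats import wishart
from scipy.special import digamma

p, n = 3, 7.0
V = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])

rng = np.random.default_rng(0)
samples = wishart(df=n, scale=V).rvs(size=20_000, random_state=rng)  # shape (20000, 3, 3)

mean_X = samples.mean(axis=0)
mean_logdet = np.mean(np.linalg.slogdet(samples)[1])
expected_logdet = (digamma((n + 1 - np.arange(1, p + 1)) / 2).sum()
                   + p * np.log(2) + np.linalg.slogdet(V)[1])

assert np.allclose(mean_X, n * V, rtol=0.05, atol=0.1)
assert np.isclose(mean_logdet, expected_logdet, atol=0.05)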
Maximum entropy derivation
The exponential family arises naturally as the answer to the following question: what is the maximumentropy distribution consistent with given constraints on expected values?
The information entropy of a probability distribution dF(x) can only be computed with respect to some other probability distribution (or, more generally, a positive measure), and both measures must be mutually absolutely continuous. Accordingly, we need to pick a reference measure dH(x) with the same support as dF(x).
The entropy of dF(x) relative to dH(x) is

S[dF\mid dH]=-\int \frac{dF}{dH}\ln\frac{dF}{dH}\,dH
or

S[dF\mid dH]=\int\ln\frac{dH}{dF}\,dF
where dF/dH and dH/dF are Radon–Nikodym derivatives. Note that the ordinary definition of entropy for a discrete distribution supported on a set I, namely

S=-\sum_{i\in I} p_i\ln p_i
assumes, though this is seldom pointed out, that dH is chosen to be the counting measure on I.
Consider now a collection of observable quantities (random variables) T_{i}. The probability distribution dF whose entropy with respect to dH is greatest, subject to the conditions that the expected value of T_{i} be equal to t_{i}, is a member of the exponential family with dH as reference measure and (T_{1}, ..., T_{n}) as sufficient statistic.
The derivation is a simple variational calculation using Lagrange multipliers. Normalization is imposed by letting T_{0} = 1 be one of the constraints. The natural parameters of the distribution are the Lagrange multipliers, and the normalization factor is the Lagrange multiplier associated to T_{0}.
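In outline, one maximizes the Lagrangian

-\int \frac{dF}{dH}\ln\frac{dF}{dH}\,dH + \sum_{i=0}^n \eta_i\left(\int T_i\,\frac{dF}{dH}\,dH - t_i\right)

over the density dF/dH; setting the variation with respect to dF/dH to zero gives

-\ln\frac{dF}{dH} - 1 + \sum_{i=0}^n \eta_i T_i(x) = 0, \qquad \text{i.e.} \qquad \frac{dF}{dH} = e^{\eta_0 - 1}\exp\left(\sum_{i=1}^n \eta_i T_i(x)\right),

which is exactly the exponential-family form, with the multiplier η_{0} for the normalization constraint absorbed into the normalizing factor g(\boldsymbol\eta).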
For examples of such derivations, see Maximum entropy probability distribution.
Role in statistics
Classical estimation: sufficiency
According to the Pitman–Koopman–Darmois theorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a sufficient statistic whose dimension remains bounded as sample size increases.
Less tersely, suppose X_{k}, (where k = 1, 2, 3, ... n) are independent, identically distributed random variables. Only if their distribution is one of the exponential family of distributions is there a sufficient statistic T(X_{1}, ..., X_{n}) whose number of scalar components does not increase as the sample size n increases; the statistic T may be a vector or a single scalar number, but whatever it is, its size will neither grow nor shrink when more data are obtained.
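For instance, for independent N(μ, σ^2) observations the pair (\textstyle\sum_i X_i, \sum_i X_i^2) is sufficient whatever the value of n. A small sketch (assuming NumPy is available) that keeps only these two running sums and still recovers the maximum-likelihood estimates:

# Bounded-dimension sufficiency for i.i.d. normal data: only two running sums are
# retained, however many observations arrive; the data themselves are discarded.
import numpy as np

rng = np.random.default_rng(0)
s1, s2, n = 0.0, 0.0, 0
for batch in range(100):                     # data arrives in batches
    x = rng.normal(loc=1.5, scale=2.0, size=1_000)
    s1 += x.sum()
    s2 += (x**2).sum()
    n += x.size

mu_hat = s1 / n                              # MLE of the mean
var_hat = s2 / n - mu_hat**2                 # MLE of the variance
print(mu_hat, var_hat)                       # close to 1.5 and 4.0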
Bayesian estimation: conjugate distributions
Exponential families are also important in Bayesian statistics. In Bayesian statistics a prior distribution is multiplied by a likelihood function and then normalised to produce a posterior distribution. In the case of a likelihood which belongs to the exponential family there exists a conjugate prior, which is often also in the exponential family. A conjugate prior π for the parameter \boldsymbol\eta of an exponential family

f(x\mid\boldsymbol\eta) = h(x) \exp \left ( {\boldsymbol\eta}^{\rm T}\mathbf{T}(x) - A(\boldsymbol\eta)\right )
is given by

p_\pi(\boldsymbol\eta\mid\boldsymbol\chi,\nu) = f(\boldsymbol\chi,\nu) \exp \left (\boldsymbol\eta^{\rm T} \boldsymbol\chi - \nu A(\boldsymbol\eta) \right ),
or equivalently

p_\pi(\boldsymbol\eta\mid\boldsymbol\chi,\nu) = f(\boldsymbol\chi,\nu) g(\boldsymbol\eta)^\nu \exp \left (\boldsymbol\eta^{\rm T} \boldsymbol\chi \right ), \qquad \boldsymbol\chi \in \mathbb{R}^s
where s is the dimension of \boldsymbol\eta and \nu > 0 and \boldsymbol\chi are hyperparameters (parameters controlling parameters). ν corresponds to the effective number of observations that the prior distribution contributes, and \boldsymbol\chi corresponds to the total amount that these pseudoobservations contribute to the sufficient statistic over all observations and pseudoobservations. f(\boldsymbol\chi,\nu) is a normalization constant that is automatically determined by the remaining functions and serves to ensure that the given function is a probability density function (i.e. it is normalized). A(\boldsymbol\eta) and equivalently g(\boldsymbol\eta) are the same functions as in the definition of the distribution over which π is the conjugate prior.
A conjugate prior is one which, when combined with the likelihood and normalised, produces a posterior distribution which is of the same type as the prior. For example, if one is estimating the success probability of a binomial distribution, then if one chooses to use a beta distribution as one's prior, the posterior is another beta distribution. This makes the computation of the posterior particularly simple. Similarly, if one is estimating the parameter of a Poisson distribution the use of a gamma prior will lead to another gamma posterior. Conjugate priors are often very flexible and can be very convenient. However, if one's belief about the likely value of the theta parameter of a binomial is represented by (say) a bimodal (twohumped) prior distribution, then this cannot be represented by a beta distribution. It can however be represented by using a mixture density as the prior, here a combination of two beta distributions; this is a form of hyperprior.
An arbitrary likelihood will not belong to the exponential family, and thus in general no conjugate prior exists. The posterior will then have to be computed by numerical methods.
To show that the above prior distribution is a conjugate prior, we can derive the posterior.
First, assume that the probability of a single observation follows an exponential family, parameterized using its natural parameter:

p_F(x\mid\boldsymbol \eta) = h(x) g(\boldsymbol\eta) \exp\left(\boldsymbol\eta^{\rm T} \mathbf{T}(x)\right)
Then, for data \mathbf{X} = (x_1,\ldots,x_n), the likelihood is computed as follows:

p(\mathbf{X}\mid\boldsymbol\eta) =\left(\prod_{i=1}^n h(x_i) \right) g(\boldsymbol\eta)^n \exp\left(\boldsymbol\eta^{\rm T}\sum_{i=1}^n \mathbf{T}(x_i) \right)
Then, for the above conjugate prior:

\begin{align}p_\pi(\boldsymbol\eta\mid\boldsymbol\chi,\nu) &= f(\boldsymbol\chi,\nu) g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) \propto g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi)\end{align}
We can then compute the posterior as follows:

\begin{align} p(\boldsymbol\eta\mid\mathbf{X},\boldsymbol\chi,\nu)& \propto p(\mathbf{X}\mid\boldsymbol\eta) p_\pi(\boldsymbol\eta\mid\boldsymbol\chi,\nu) \\ &= \left(\prod_{i=1}^n h(x_i) \right) g(\boldsymbol\eta)^n \exp\left(\boldsymbol\eta^{\rm T} \sum_{i=1}^n \mathbf{T}(x_i)\right) f(\boldsymbol\chi,\nu) g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) \\ &\propto g(\boldsymbol\eta)^n \exp\left(\boldsymbol\eta^{\rm T}\sum_{i=1}^n \mathbf{T}(x_i)\right) g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) \\ &\propto g(\boldsymbol\eta)^{\nu + n} \exp\left(\boldsymbol\eta^{\rm T} \left(\boldsymbol\chi + \sum_{i=1}^n \mathbf{T}(x_i)\right)\right) \end{align}
The last line is the kernel of the prior distribution, i.e.

p(\boldsymbol\eta\mid\mathbf{X},\boldsymbol\chi,\nu) = p_\pi\left(\boldsymbol\eta\mid\boldsymbol\chi + \sum_{i=1}^n \mathbf{T}(x_i), \nu + n \right)
This shows that the posterior has the same form as the prior.
Note in particular that the data X enters into this equation only in the expression

\mathbf{T}(\mathbf{X}) = \sum_{i=1}^n \mathbf{T}(x_i),
which is termed the sufficient statistic of the data. That is, the value of the sufficient statistic is sufficient to completely determine the posterior distribution. The actual data points themselves are not needed, and all sets of data points with the same sufficient statistic will have the same distribution. This is important because the dimension of the sufficient statistic does not grow with the data size — it has only as many components as the components of \boldsymbol\eta (equivalently, the number of parameters of the distribution of a single data point).
The update equations are as follows:
\begin{align} \boldsymbol\chi' &= \boldsymbol\chi + \mathbf{T}(\mathbf{X}) \\ &= \boldsymbol\chi + \sum_{i=1}^n \mathbf{T}(x_i) \\ \nu' &= \nu + n \end{align}
This shows that the update equations can be written simply in terms of the number of data points and the sufficient statistic of the data. This can be seen clearly in the various examples of update equations shown in the conjugate prior page. Note also that because of the way that the sufficient statistic is computed, it necessarily involves sums of components of the data (in some cases disguised as products or other forms — a product can be written in terms of a sum of logarithms). The cases where the update equations for particular distributions don't exactly match the above forms are cases where the conjugate prior has been expressed using a different parameterization than the one that produces a conjugate prior of the above form — often specifically because the above form is defined over the natural parameter \boldsymbol\eta while conjugate priors are usually defined over the actual parameter \boldsymbol\theta .
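As an illustration (a sketch assuming NumPy is available; the specific numbers are arbitrary), consider a Poisson likelihood, for which T(x) = x and A(η) = e^η. Identifying the conjugate prior over η = ln λ with a Gamma(a, b) prior over λ (via the change of variables from η to λ) gives χ = a and ν = b, so the update above reduces to the familiar Poisson–gamma rule a' = a + Σ x_i, b' = b + n:

# Conjugate updating for a Poisson likelihood written in natural-parameter form:
# chi' = chi + sum_i T(x_i) and nu' = nu + n, i.e. the usual gamma-posterior update.
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(lam=4.0, size=50)            # observed data

a, b = 2.0, 1.0                              # Gamma(a, b) prior on lambda
chi, nu = a, b                               # corresponding hyperparameters

chi_new = chi + x.sum()                      # chi' = chi + sum_i T(x_i)
nu_new = nu + x.size                         # nu'  = nu + n

posterior_mean = chi_new / nu_new            # mean of Gamma(a + sum x, b + n)
print(posterior_mean)                        # close to the true rate 4.0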
Hypothesis testing: Uniformly most powerful tests
The one-parameter exponential family has a monotone non-decreasing likelihood ratio in the sufficient statistic T(x), provided that η(θ) is non-decreasing. As a consequence, there exists a uniformly most powerful test for testing the hypothesis H_{0}: θ ≥ θ_{0} vs. H_{1}: θ < θ_{0}.
Generalized linear models
The exponential family forms the basis for the distribution function used in generalized linear models, a class of models that encompasses many of the commonly used regression models in statistics.
References

^ Andersen, Erling (September 1970). "Sufficiency and Exponential Families for Discrete Sample Spaces".

^ Pitman, E. J. G. (1936). "Sufficient statistics and intrinsic accuracy". Mathematical Proceedings of the Cambridge Philosophical Society 32 (4): 567–579.

^ Darmois, G. (1935). "Sur les lois de probabilites a estimation exhaustive". C.R. Acad. Sci. Paris (in French) 200: 1265–1266.

^ Koopman, B. O. (1936). "On distributions admitting a sufficient statistic". Transactions of the American Mathematical Society 39 (3): 399–409.

^ Kupperman, M. (1958) "Probabilities of Hypotheses and InformationStatistics in Sampling from ExponentialClass Populations", Annals of Mathematical Statistics, 9 (2), 571–575 JSTOR 2237349

^ Nielsen, Frank; Garcia, Vincent (2009). "Statistical exponential families: A digest with flash cards".
Further reading

Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.), sec. 1.5.

Keener, Robert W. (2006). Statistical Theory: Notes for a Course in Theoretical Statistics. Springer. pp. 27–28, 32–33.

Fahrmeir, Ludwig; Tutz, G. (1994). Multivariate statistical modelling based on generalized linear models. Springer. pp. 18–22, 345–349.
External links

A primer on the exponential family of distributions

Exponential family of distributions on the Earliest known uses of some of the words of mathematics

jMEF: A Java library for exponential families













