
Search results

  1. In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions.

    • Probability Distribution
    • Operations on Random Vectors
    • Expected Value
    • Covariance and Cross-Covariance
    • Correlation and Cross-Correlation
    • Orthogonality
    • Independence
    • Characteristic Function
    • Further Properties
    • Applications

    Every random vector gives rise to a probability measure on $\mathbb{R}^{n}$ with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector. The distributions of each of the component random variables are called marginal distributions.

    Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.

    The expected value or mean of a random vector $\mathbf{X}$ is a fixed vector $\operatorname{E}[\mathbf{X}]$ whose elements are the expected values of the respective random variables.
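
    As a quick illustration (my own sketch, not from the source), the mean vector can be estimated componentwise from samples; the 3-component vector and its mean below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Draw 100,000 samples of a hypothetical 3-component random vector X
    # with known mean (1, -2, 0.5); rows are draws, columns are components.
    true_mean = np.array([1.0, -2.0, 0.5])
    samples = rng.normal(loc=true_mean, scale=1.0, size=(100_000, 3))

    # E[X] is the vector of componentwise expected values; the sample mean
    # over the rows is its natural estimator.
    estimated_mean = samples.mean(axis=0)
    print(estimated_mean)   # close to [1.0, -2.0, 0.5]
    ```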

    Definitions

    The covariance matrix (also called second central moment or variance-covariance matrix) of an $n \times 1$ random vector is an $n \times n$ matrix whose $(i,j)$-th element is the covariance between the $i$-th and the $j$-th random variables. The covariance matrix is the expected value, element by element, of the $n \times n$ matrix computed as $[\mathbf{X} - \operatorname{E}[\mathbf{X}]][\mathbf{X} - \operatorname{E}[\mathbf{X}]]^{T}$:

    $$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{T}\right].$$
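
    A minimal numerical sketch (assuming NumPy and a hypothetical mixing matrix) checking the definition against NumPy's built-in estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Correlated samples built from a hypothetical 3x3 mixing matrix.
    A = np.array([[2.0, 0.0, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.3, 0.7]])
    X = rng.standard_normal((50_000, 3)) @ A.T

    # Covariance matrix straight from the definition:
    # K_XX = E[(X - E[X])(X - E[X])^T], estimated by averaging outer products.
    centered = X - X.mean(axis=0)
    K = centered.T @ centered / (len(X) - 1)

    # np.cov with rowvar=False treats columns as variables, rows as draws.
    assert np.allclose(K, np.cov(X, rowvar=False))
    print(K)
    ```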

    Properties

    The covariance matrix is a symmetric matrix, i.e.

    $$\operatorname{K}_{\mathbf{X}\mathbf{X}}^{T} = \operatorname{K}_{\mathbf{X}\mathbf{X}}.$$

    The covariance matrix is a positive semidefinite matrix, i.e.

    $$\mathbf{a}^{T}\operatorname{K}_{\mathbf{X}\mathbf{X}}\,\mathbf{a} \geq 0 \quad \text{for all } \mathbf{a} \in \mathbb{R}^{n}.$$

    The cross-covariance matrix $\operatorname{Cov}[\mathbf{Y},\mathbf{X}]$ is simply the transpose of the matrix $\operatorname{Cov}[\mathbf{X},\mathbf{Y}]$.
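
    Both properties are easy to sanity-check numerically; the sketch below (my own illustration with hypothetical scales) verifies symmetry and non-negative eigenvalues for a sample covariance matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((10_000, 4)) * np.array([1.0, 2.0, 0.5, 3.0])
    K = np.cov(X, rowvar=False)

    # Symmetry: K^T == K.
    assert np.allclose(K, K.T)

    # Positive semidefiniteness: the eigenvalues of a symmetric matrix are
    # real, and for a covariance matrix they are >= 0 (up to floating-point
    # noise near zero).
    eigenvalues = np.linalg.eigvalsh(K)
    print(eigenvalues)
    assert (eigenvalues > -1e-10).all()
    ```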

    Uncorrelatedness

    Two random vectors $\mathbf{X} = (X_{1},\ldots,X_{m})^{T}$ and $\mathbf{Y} = (Y_{1},\ldots,Y_{n})^{T}$ are called uncorrelated if

    $$\operatorname{E}[\mathbf{X}\mathbf{Y}^{T}] = \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{T}.$$

    They are uncorrelated if and only if their cross-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ is zero.
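
    A sketch of what uncorrelatedness looks like empirically (my own illustration), using independent draws, since independence implies uncorrelatedness though not conversely; all sizes are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Independent draws, so X (2 components) and Y (3 components) are
    # uncorrelated.
    X = rng.standard_normal((100_000, 2))
    Y = rng.standard_normal((100_000, 3))

    # Sample cross-covariance K_XY = E[(X - E[X])(Y - E[Y])^T], a 2x3 matrix.
    K_XY = (X - X.mean(axis=0)).T @ (Y - Y.mean(axis=0)) / (len(X) - 1)
    print(K_XY)   # all entries close to zero
    ```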

    Definitions

    The correlation matrix (also called second moment) of an $n \times 1$ random vector is an $n \times n$ matrix whose $(i,j)$-th element is the correlation between the $i$-th and the $j$-th random variables. The correlation matrix is the expected value, element by element, of the $n \times n$ matrix computed as $\mathbf{X}\mathbf{X}^{T}$, where the superscript $T$ refers to the transpose of the indicated vector:

    $$\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^{T}].$$
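
    Note that in this convention the correlation matrix is the raw second moment $\operatorname{E}[\mathbf{X}\mathbf{X}^{T}]$, not the normalized Pearson matrix. A minimal sketch (my own, with hypothetical parameters):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(loc=[1.0, -1.0], scale=1.0, size=(100_000, 2))

    # Second-moment matrix E[X X^T]: average the outer product of each
    # sample with itself; equivalently X^T X / N for stacked rows.
    R = X.T @ X / len(X)
    print(R)   # roughly [[2, -1], [-1, 2]] for these parameters
    ```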

    Properties

    The correlation matrix is related to the covariance matrix by

    $$\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{T}.$$

    Similarly for the cross-correlation matrix and the cross-covariance matrix:

    $$\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{T}.$$
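
    This identity also holds exactly for the matched sample estimators (both divided by $N$); a quick check, assuming NumPy:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.normal(loc=[2.0, -1.0, 0.5], scale=1.5, size=(200_000, 3))

    mean = X.mean(axis=0)
    K = np.cov(X, rowvar=False, bias=True)   # divides by N, matching R below
    R = X.T @ X / len(X)                     # sample second moment E[X X^T]

    # Identity: R_XX = K_XX + E[X] E[X]^T (exact for these estimators).
    assert np.allclose(R, K + np.outer(mean, mean))
    print("identity holds up to floating point")
    ```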

    Two random vectors of the same size $\mathbf{X} = (X_{1},\ldots,X_{n})^{T}$ and $\mathbf{Y} = (Y_{1},\ldots,Y_{n})^{T}$ are called orthogonal if

    $$\operatorname{E}[\mathbf{X}^{T}\mathbf{Y}] = 0.$$
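
    A small sketch (my own illustration): for zero-mean independent draws, $\operatorname{E}[\mathbf{X}^{T}\mathbf{Y}] = 0$, so the sample average of the inner products should be near zero:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Zero-mean independent draws: E[X^T Y] = sum_i E[X_i Y_i] = 0,
    # so X and Y are orthogonal in this sense.
    X = rng.standard_normal((200_000, 3))
    Y = rng.standard_normal((200_000, 3))

    inner = np.einsum('ij,ij->i', X, Y)   # the scalar X^T Y for each draw
    print(inner.mean())                   # close to 0
    ```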

    Two random vectors $\mathbf{X}$ and $\mathbf{Y}$ are called independent if for all $\mathbf{x}$ and $\mathbf{y}$

    $$F_{\mathbf{X},\mathbf{Y}}(\mathbf{x},\mathbf{y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y}),$$

    where $F_{\mathbf{X}}$ and $F_{\mathbf{Y}}$ denote the cumulative distribution functions of $\mathbf{X}$ and $\mathbf{Y}$ and $F_{\mathbf{X},\mathbf{Y}}$ denotes their joint cumulative distribution function.
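
    The factorization can be checked empirically at any test point; the sketch below (my own, using scalar-valued vectors and a hypothetical test point for simplicity) compares the joint empirical CDF with the product of the marginals:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Independent scalar-valued X and Y (1-component vectors for simplicity).
    X = rng.standard_normal(500_000)
    Y = rng.standard_normal(500_000)

    x0, y0 = 0.5, -0.3
    joint = np.mean((X <= x0) & (Y <= y0))         # F_{X,Y}(x0, y0)
    product = np.mean(X <= x0) * np.mean(Y <= y0)  # F_X(x0) * F_Y(y0)
    print(joint, product)   # nearly equal, as the factorization requires
    ```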

    The characteristic function of a random vector $\mathbf{X}$ with $n$ components is a function $\mathbb{R}^{n} \to \mathbb{C}$ that maps every vector $\boldsymbol{\omega} = (\omega_{1},\ldots,\omega_{n})^{T}$ to a complex number. It is defined by

    $$\varphi_{\mathbf{X}}(\boldsymbol{\omega}) = \operatorname{E}\left[e^{i(\boldsymbol{\omega}^{T}\mathbf{X})}\right].$$
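
    A minimal sketch (my own): for a standard normal vector the characteristic function has the known closed form $e^{-\frac{1}{2}\boldsymbol{\omega}^{T}\boldsymbol{\omega}}$, which the empirical average reproduces:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    X = rng.standard_normal((500_000, 2))   # standard bivariate normal

    omega = np.array([0.7, -1.2])           # hypothetical test frequency

    # Empirical characteristic function: average of exp(i * omega^T X).
    phi_hat = np.exp(1j * X @ omega).mean()

    # Exact value for a standard normal vector: exp(-0.5 * |omega|^2).
    phi_exact = np.exp(-0.5 * omega @ omega)
    print(phi_hat, phi_exact)   # real parts agree; imaginary part near 0
    ```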

    Expectation of a quadratic form

    One can take the expectation of a quadratic form in the random vector $\mathbf{X}$ as follows:

    $$\operatorname{E}[\mathbf{X}^{T}A\mathbf{X}] = \operatorname{E}[\mathbf{X}]^{T}A\operatorname{E}[\mathbf{X}] + \operatorname{tr}(AK_{\mathbf{X}\mathbf{X}}),$$

    where $K_{\mathbf{X}\mathbf{X}}$ is the covariance matrix of $\mathbf{X}$ and $\operatorname{tr}$ denotes the trace of a matrix.
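
    A Monte Carlo check of this identity (my own sketch; the mean, covariance factor, and $A$ below are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    mu = np.array([1.0, -0.5, 2.0])
    L = np.array([[1.0, 0.0, 0.0],
                  [0.4, 0.8, 0.0],
                  [0.1, 0.2, 0.6]])
    K = L @ L.T                              # covariance matrix
    A = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

    X = rng.multivariate_normal(mu, K, size=500_000)

    # Monte Carlo estimate of E[X^T A X] (one quadratic form per draw) ...
    lhs = np.einsum('ij,jk,ik->i', X, A, X).mean()
    # ... versus the closed form E[X]^T A E[X] + tr(A K).
    rhs = mu @ A @ mu + np.trace(A @ K)
    print(lhs, rhs)   # agree to Monte Carlo accuracy
    ```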

    Expectation of the product of two different quadratic forms

    One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector $\mathbf{X}$ as follows:

    $$\operatorname{E}\left[(\mathbf{X}^{T}A\mathbf{X})(\mathbf{X}^{T}B\mathbf{X})\right] = 2\operatorname{tr}(AK_{\mathbf{X}\mathbf{X}}BK_{\mathbf{X}\mathbf{X}}) + \operatorname{tr}(AK_{\mathbf{X}\mathbf{X}})\operatorname{tr}(BK_{\mathbf{X}\mathbf{X}}),$$

    where $K_{\mathbf{X}\mathbf{X}}$ is again the covariance matrix of $\mathbf{X}$.
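
    A Monte Carlo check (my own sketch; $A$ and $B$ are taken symmetric, the standard setting for this identity, and all values are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    L = np.array([[1.0, 0.0],
                  [0.3, 0.7]])
    K = L @ L.T                  # covariance of a zero-mean Gaussian vector
    A = np.array([[1.0, 0.2],
                  [0.2, 2.0]])   # symmetric
    B = np.array([[0.5, -0.1],
                  [-0.1, 1.0]])  # symmetric

    X = rng.multivariate_normal(np.zeros(2), K, size=2_000_000)

    qa = np.einsum('ij,jk,ik->i', X, A, X)   # X^T A X per draw
    qb = np.einsum('ij,jk,ik->i', X, B, X)   # X^T B X per draw
    lhs = (qa * qb).mean()
    rhs = 2 * np.trace(A @ K @ B @ K) + np.trace(A @ K) * np.trace(B @ K)
    print(lhs, rhs)   # agree to Monte Carlo accuracy
    ```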

    Portfolio theory

    In portfolio theory in finance, a common objective is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector $\mathbf{r}$ of random returns on the individual assets, and the portfolio return $p$ (a random scalar) is the inner product of the vector of random returns with a vector $\mathbf{w}$ of portfolio weights. Since $p = \mathbf{w}^{T}\mathbf{r}$, the expected portfolio return is $\mathbf{w}^{T}\operatorname{E}[\mathbf{r}]$ and the portfolio variance is $\mathbf{w}^{T}K_{\mathbf{r}\mathbf{r}}\mathbf{w}$.
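
    As an illustration beyond the excerpt, the fully-invested minimum-variance portfolio has the standard closed form $\mathbf{w} = K^{-1}\mathbf{1} / (\mathbf{1}^{T}K^{-1}\mathbf{1})$; the covariance matrix below is hypothetical:

    ```python
    import numpy as np

    # Hypothetical covariance matrix of three asset returns.
    K = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.090, 0.010],
                  [0.002, 0.010, 0.160]])

    # Minimum-variance fully-invested portfolio: minimize w^T K w subject
    # to sum(w) = 1, giving w = K^{-1} 1 / (1^T K^{-1} 1).
    ones = np.ones(3)
    w = np.linalg.solve(K, ones)
    w /= ones @ w

    print(w)            # weights summing to 1
    print(w @ K @ w)    # the resulting (minimal) portfolio variance
    ```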

    Regression theory

    In linear regression theory, we have data on n observations on a dependent variable y and n observations on each of k independent variables xj. The observations on the dependent variable are stacked into a column vector $\mathbf{y}$; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a design matrix $X$ (not denoting a random vector in this context) of observations on the independent variables. Then the regression equation $\mathbf{y} = X\boldsymbol{\beta} + \mathbf{e}$ is postulated as a description of the process that generated the data, where $\boldsymbol{\beta}$ is a fixed but unknown vector of $k$ coefficients and $\mathbf{e}$ is an unknown random vector reflecting random influences on the dependent variable.
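
    A minimal sketch (my own, with simulated data) estimating $\boldsymbol{\beta}$ by ordinary least squares:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical data: n = 200 observations, 2 regressors plus an intercept.
    n = 200
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
    beta_true = np.array([1.0, 2.0, -0.5])
    y = X @ beta_true + 0.1 * rng.standard_normal(n)   # e is the noise term

    # Ordinary least squares: solve min ||y - X beta|| in the least-squares
    # sense (equivalent to the normal equations X^T X beta = X^T y).
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta_hat)   # close to [1.0, 2.0, -0.5]
    ```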

    Vector time series

    The evolution of a $k \times 1$ random vector $\mathbf{X}$ through time can be modelled as a vector autoregression (VAR) as follows:

    $$\mathbf{X}_{t} = c + A_{1}\mathbf{X}_{t-1} + A_{2}\mathbf{X}_{t-2} + \cdots + A_{p}\mathbf{X}_{t-p} + \mathbf{e}_{t},$$

    where the $i$-periods-back vector observation $\mathbf{X}_{t-i}$ is called the $i$-th lag of $\mathbf{X}$, $c$ is a $k \times 1$ vector of constants (intercepts), $A_{i}$ is a time-invariant $k \times k$ matrix and $\mathbf{e}_{t}$ is a $k \times 1$ random vector of error terms.
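
    A minimal simulation sketch (my own; the coefficients are hypothetical and chosen stable) for the VAR(1) case $p = 1$:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    # Hypothetical stable VAR(1): X_t = c + A1 X_{t-1} + e_t, with k = 2.
    c = np.array([0.1, -0.2])
    A1 = np.array([[0.5, 0.1],
                   [0.0, 0.3]])   # eigenvalues inside the unit circle
    T = 1_000

    X = np.zeros((T, 2))
    for t in range(1, T):
        e_t = 0.2 * rng.standard_normal(2)   # error-term random vector
        X[t] = c + A1 @ X[t - 1] + e_t

    # The process settles around its stationary mean (I - A1)^{-1} c.
    print(X[200:].mean(axis=0))
    print(np.linalg.solve(np.eye(2) - A1, c))
    ```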

  2. A multivariate normal distribution is the distribution of a vector of multiple normally distributed variables, such that any linear combination of the variables is also normally distributed.
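
     A minimal sketch (my own illustration, with a hypothetical mean and covariance) of this defining property:

     ```python
     import numpy as np

     rng = np.random.default_rng(13)

     mu = np.array([1.0, -2.0])
     K = np.array([[2.0, 0.6],
                   [0.6, 1.0]])
     X = rng.multivariate_normal(mu, K, size=300_000)

     # Any linear combination a^T X of a multivariate normal vector is
     # univariate normal with mean a^T mu and variance a^T K a.
     a = np.array([0.7, -1.5])
     z = X @ a
     print(z.mean(), a @ mu)      # means agree
     print(z.var(), a @ K @ a)    # variances agree
     ```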

  3. Description. The normal distribution is considered the base case of continuous probability distributions because of its role in the central limit theorem. A given set of values may be normal: to establish this, a normality test can be used.

  4. 23 Apr 2022 · The multivariate normal distribution is among the most important of multivariate distributions, particularly in statistical inference and the study of Gaussian processes such as Brownian motion. The distribution arises naturally from linear transformations of independent normal variables.