{{Short description|Statistics function}}
{{For|the phase-space function representing a quantum state|Husimi Q representation}}
[[Image:Q-function.png|thumb|right|400px|A plot of the Q-function.]]

In [[statistics]], the '''Q-function''' is the [[Cumulative distribution function#Complementary cumulative distribution function (tail distribution)|tail distribution function]] of the [[Normal distribution#Standard normal distribution|standard normal distribution]].<ref>{{cite web|url=http://cnx.org/content/m11537/latest/|title=The Q-function|website=[[cnx.org]]|archive-url=https://web.archive.org/web/20120229030808/http://cnx.org/content/m11537/latest/|archive-date=2012-02-29}}</ref><ref name="jo">{{cite web|url=http://www.eng.tau.ac.il/~jo/academic/Q.pdf|title=Basic properties of the Q-function|archive-url=https://web.archive.org/web/20090325160012/http://www.eng.tau.ac.il/~jo/academic/Q.pdf|archive-date=2009-03-25|date=2009-03-05}}</ref> In other words, <math>Q(x)</math> is the probability that a normal (Gaussian) [[random variable]] takes a value more than <math>x</math> standard deviations above its mean. Equivalently, <math>Q(x)</math> is the probability that a standard normal random variable takes a value larger than <math>x</math>.

If <math>Y</math> is a Gaussian random variable with mean <math>\mu</math> and variance <math>\sigma^2</math>, then <math>X = \frac{Y-\mu}{\sigma}</math> is [[Normal distribution#Standard normal distribution|standard normal]] and

:<math>P(Y > y) = P(X > x) = Q(x),</math>

where <math>x = \frac{y-\mu}{\sigma}</math>.

Other definitions of the ''Q''-function, all of which are simple transformations of the normal [[cumulative distribution function]], are also used occasionally.<ref>[http://mathworld.wolfram.com/NormalDistributionFunction.html Normal Distribution Function – from Wolfram MathWorld]</ref>

Because of its relation to the cumulative distribution function of the normal distribution, the ''Q''-function can also be expressed in terms of the [[error function]], which is an important function in applied mathematics and physics.

== Definition and basic properties ==
Formally, the ''Q''-function is defined as

:<math>Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty \exp\left(-\frac{u^2}{2}\right) \, du.</math>

Thus,

:<math>Q(x) = 1 - Q(-x) = 1 - \Phi(x),</math>

where <math>\Phi(x)</math> is the [[Standard normal distribution#Cumulative distribution function|cumulative distribution function of the standard normal distribution]].

The ''Q''-function can be expressed in terms of the [[error function]], or the complementary error function, as<ref name="jo"/>

:<math>
\begin{align}
Q(x) &= \frac{1}{2}\left( \frac{2}{\sqrt{\pi}} \int_{x/\sqrt{2}}^\infty \exp\left(-t^2\right) \, dt \right)\\
&= \frac{1}{2} - \frac{1}{2} \operatorname{erf} \left( \frac{x}{\sqrt{2}} \right)\\
&= \frac{1}{2}\operatorname{erfc} \left(\frac{x}{\sqrt{2}} \right).
\end{align}
</math>

An alternative form of the ''Q''-function, known as Craig's formula after its discoverer, is:<ref>{{cite book |doi=10.1109/MILCOM.1991.258319 |chapter-url=http://wsl.stanford.edu/~ee359/craig.pdf|chapter=A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations|title=MILCOM 91 - Conference record|pages=571–575|year=1991|last1=Craig|first1=J.W.|isbn=0-87942-691-8|s2cid=16034807}}</ref>

:<math>Q(x) = \frac{1}{\pi} \int_0^{\frac{\pi}{2}} \exp \left( - \frac{x^2}{2 \sin^2 \theta} \right) d\theta.</math>

This expression is valid only for positive values of ''x'', but it can be used in conjunction with ''Q''(''x'') = 1 − ''Q''(−''x'') to obtain ''Q''(''x'') for negative values. This form is advantageous in that the range of integration is fixed and finite.

Craig's formula was later extended by Behnad (2020)<ref>{{cite journal |doi=10.1109/TCOMM.2020.2986209 |title=A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis|journal=IEEE Transactions on Communications |volume=68|issue=7|pages=4117–4125|year=2020|last1=Behnad|first1=Aydin|s2cid=216500014}}</ref> to the ''Q''-function of the sum of two non-negative variables:

:[[File:Q function complex plot plotted with Mathematica 13.1 ComplexPlot3D.svg|alt=the Q-function plotted in the complex plane|thumb|The Q-function plotted in the complex plane.]]
:<math>Q(x+y) = \frac{1}{\pi} \int_0^{\frac{\pi}{2}} \exp \left( - \frac{x^2}{2 \sin^2 \theta} - \frac{y^2}{2 \cos^2 \theta} \right) d\theta, \qquad x,y \geqslant 0 .</math>
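The complementary-error-function form above translates directly into code. As a minimal sketch using only the Python standard library (the function name <code>q_function</code> is illustrative):

```python
from math import erfc, sqrt

def q_function(x: float) -> float:
    # Q(x) = (1/2) * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

print(q_function(0.0))   # 0.5, since half the probability mass lies above the mean
print(q_function(1.0))   # the upper one-sigma tail probability
```

By symmetry of the standard normal density, <code>q_function(-x) + q_function(x)</code> equals 1 for any ''x''.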

==Bounds and approximations==

*The ''Q''-function is not an [[elementary function]]. However, it can be bounded from above and below as<ref name="Gordon">{{Cite journal |title=Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument |journal=Ann. Math. Stat. |volume=12 |pages=364–366 |year=1941 |last=Gordon |first=R.D.}}</ref><ref name="Borjesson">{{Cite journal |doi=10.1109/TCOM.1979.1094433 |title=Simple Approximations of the Error Function Q(x) for Communications Applications |journal=IEEE Transactions on Communications |volume=27 |issue=3 |pages=639–643 |year=1979 |last1=Borjesson |first1=P. |last2=Sundberg |first2=C.-E.}}</ref>

::<math>\left (\frac{x}{1+x^2} \right ) \phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x>0,</math>

:where <math>\phi(x)</math> is the density function of the standard normal distribution; the bounds become increasingly tight for large ''x''.

:Using the [[integration by substitution|substitution]] <math>v = u^2/2</math>, the upper bound is derived as follows:

::<math>Q(x) =\int_x^\infty\phi(u)\,du <\int_x^\infty\frac ux\phi(u)\,du =\int_{\frac{x^2}{2}}^\infty\frac{e^{-v}}{x\sqrt{2\pi}}\,dv=-\biggl.\frac{e^{-v}}{x\sqrt{2\pi}}\biggr|_{\frac{x^2}{2}}^\infty=\frac{\phi(x)}{x}.</math>

:Similarly, using <math>\phi'(u) = - u \phi(u)</math> and the [[quotient rule]],

::<math>\left(1+\frac1{x^2}\right)Q(x) =\int_x^\infty \left(1+\frac1{x^2}\right)\phi(u)\,du >\int_x^\infty \left(1+\frac1{u^2}\right)\phi(u)\,du =-\biggl.\frac{\phi(u)}u\biggr|_x^\infty =\frac{\phi(x)}x.</math>

:Solving for ''Q''(''x'') provides the lower bound.

:The [[geometric mean]] of the upper and lower bound gives a suitable approximation for <math>Q(x)</math>:

::<math>Q(x) \approx \frac{\phi(x)}{\sqrt{1 + x^2}}, \qquad x \geq 0. </math>

*Tighter bounds and approximations of <math>Q(x)</math> can also be obtained by optimizing the following expression:<ref name="Borjesson"/>

:: <math> \tilde{Q}(x) = \frac{\phi(x)}{(1-a)x + a\sqrt{x^2 + b}}. </math>

:For <math>x \geq 0</math>, the best upper bound is given by <math>a = 0.344</math> and <math>b = 5.334</math>, with a maximum absolute relative error of 0.44%. Likewise, the best approximation is given by <math>a = 0.339</math> and <math>b = 5.510</math>, with a maximum absolute relative error of 0.27%. Finally, the best lower bound is given by <math>a = 1/\pi</math> and <math>b = 2 \pi</math>, with a maximum absolute relative error of 1.17%.

*The [[Chernoff bound]] of the ''Q''-function is

::<math>Q(x)\leq e^{-\frac{x^2}{2}}, \qquad x>0.</math>

*Improved exponential bounds and a pure exponential approximation are<ref>{{cite journal |url=http://campus.unibo.it/85943/1/mcddmsTranWIR2003.pdf |doi=10.1109/TWC.2003.814350|title=New exponential bounds and approximations for the computation of error probability in fading channels|journal=IEEE Transactions on Wireless Communications|volume=24|issue=5|pages=840–845|year=2003|last1=Chiani|first1=M.|last2=Dardari|first2=D.|last3=Simon|first3=M.K.}}</ref>

::<math>Q(x)\leq \tfrac{1}{4}e^{-x^2}+\tfrac{1}{4}e^{-\frac{x^2}{2}} \leq \tfrac{1}{2}e^{-\frac{x^2}{2}}, \qquad x>0,</math>

:: <math>Q(x)\approx \frac{1}{12}e^{-\frac{x^2}{2}}+\frac{1}{4}e^{-\frac{2}{3} x^2}, \qquad x>0. </math>

*The above were generalized by Tanash & Riihonen (2020),<ref>{{cite journal |doi=10.1109/TCOMM.2020.3006902|title=Global minimax approximations and bounds for the Gaussian Q-function by sums of exponentials|journal=IEEE Transactions on Communications|year=2020|last1=Tanash|first1=I.M.|last2=Riihonen|first2=T.|volume=68|issue=10|pages=6514–6524|arxiv=2007.06939|s2cid=220514754}}</ref> who showed that <math>Q(x)</math> can be accurately approximated or bounded by

::<math>\tilde{Q}(x) = \sum_{n=1}^N a_n e^{-b_n x^2}.</math>

:In particular, they presented a systematic methodology to solve for the numerical coefficients <math>\{(a_n,b_n)\}_{n=1}^N</math> that yield a [[minimax approximation algorithm|minimax]] approximation or bound: <math>Q(x) \approx \tilde{Q}(x)</math>, <math>Q(x) \leq \tilde{Q}(x)</math>, or <math>Q(x) \geq \tilde{Q}(x)</math> for <math>x\geq0</math>. With the example coefficients tabulated in the paper for <math>N = 20</math>, the relative and absolute approximation errors are less than <math>2.831 \cdot 10^{-6}</math> and <math>1.416 \cdot 10^{-6}</math>, respectively. The coefficients <math>\{(a_n,b_n)\}_{n=1}^N</math> for many variations of the exponential approximations and bounds up to <math>N = 25</math> have been released to open access as a comprehensive dataset.<ref>{{cite journal |doi=10.5281/zenodo.4112978|title=Coefficients for Global Minimax Approximations and Bounds for the Gaussian Q-Function by Sums of Exponentials [Data set]|url=https://zenodo.org/record/4112978|website=Zenodo|year=2020|last1=Tanash|first1=I.M.|last2=Riihonen|first2=T.}}</ref>

*Another approximation of <math>Q(x)</math> for <math>x \in [0,\infty)</math> is given by [[George Karagiannidis|Karagiannidis]] & Lioumpas (2007),<ref>{{cite journal |doi=10.1109/LCOMM.2007.070470 |url=http://users.auth.gr/users/9/3/028239/public_html/pdf/Q_Approxim.pdf|title=An Improved Approximation for the Gaussian Q-Function|journal=IEEE Communications Letters|volume=11|issue=8|pages=644–646|year=2007|last1=Karagiannidis|first1=George|last2=Lioumpas|first2=Athanasios|s2cid=4043576}}</ref> who showed that for an appropriate choice of parameters <math>\{A, B\}</math>,

:: <math>f(x; A, B) = \frac{\left(1 - e^{-Ax}\right)e^{-x^2}}{B\sqrt{\pi} x} \approx \operatorname{erfc} \left(x\right).</math>

:The absolute error between <math>f(x; A, B)</math> and <math>\operatorname{erfc}(x)</math> over the range <math>[0, R]</math> is minimized by evaluating

:: <math>\{A, B\} = \underset{\{A,B\}}{\arg \min} \frac{1}{R} \int_0^R | f(x; A, B) - \operatorname{erfc}(x) |\,dx.</math>

:Using <math>R = 20</math> and integrating numerically, they found that the minimum error occurred for <math>\{A, B\} = \{1.98, 1.135\},</math> which gives a good approximation for all <math> x \ge 0.</math>

:Substituting these values and using the relationship between <math>Q(x)</math> and <math>\operatorname{erfc}(x)</math> from above gives

:: <math> Q(x)\approx\frac{\left( 1-e^{\frac{-1.98x} {\sqrt{2}}}\right) e^{-\frac{x^{2}}{2}}}{1.135\sqrt{2\pi}x}, \qquad x \ge 0. </math>

:Alternative coefficients are also available for the above Karagiannidis–Lioumpas approximation, either tailoring accuracy for a specific application or transforming it into a tight bound.<ref>{{cite journal |doi=10.1109/LCOMM.2021.3052257|title=Improved coefficients for the Karagiannidis–Lioumpas approximations and bounds to the Gaussian Q-function|journal=IEEE Communications Letters|year=2021|last1=Tanash|first1=I.M.|last2=Riihonen|first2=T.|volume=25|issue=5|pages=1468–1471|arxiv=2101.07631|s2cid=231639206}}</ref>

*A tighter and more tractable approximation of <math>Q(x)</math> for positive arguments <math>x \in [0,\infty)</math> is given by López-Benítez & Casadevall (2011),<ref>{{cite journal |doi=10.1109/TCOMM.2011.012711.100105 |url=http://www.lopezbenitez.es/journals/IEEE_TCOM_2011.pdf|title=Versatile, Accurate, and Analytically Tractable Approximation for the Gaussian Q-Function|journal=IEEE Transactions on Communications|volume=59|issue=4|pages=917–922|year=2011|last1=Lopez-Benitez|first1=Miguel|last2=Casadevall|first2=Fernando|s2cid=1145101}}</ref> based on a second-order exponential function:

:: <math> Q(x) \approx e^{-ax^2-bx-c}, \qquad x \ge 0. </math>

:The fitting coefficients <math> (a,b,c) </math> can be optimized over any desired range of arguments, either to minimize the sum of squared errors (<math>a = 0.3842</math>, <math>b = 0.7640</math>, <math>c = 0.6964</math> for <math>x \in [0,20]</math>) or to minimize the maximum absolute error (<math>a = 0.4920</math>, <math>b = 0.2887</math>, <math>c = 1.1893</math> for <math>x \in [0,20]</math>). This approximation offers a good trade-off between accuracy and analytical tractability; for example, extension to any arbitrary power of <math>Q(x)</math> is trivial and does not alter the algebraic form of the approximation.
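The elementary bounds, their geometric mean, and the Chernoff bound are easy to verify numerically. A short Python sketch (helper names are illustrative) checks them at a few sample points:

```python
from math import erfc, exp, pi, sqrt

def q(x):
    # Q(x) = (1/2) * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

def phi(x):
    # standard normal density
    return exp(-x * x / 2) / sqrt(2 * pi)

for x in (0.5, 1.0, 2.0, 4.0):
    lower = (x / (1 + x * x)) * phi(x)      # lower bound
    upper = phi(x) / x                      # upper bound
    approx = phi(x) / sqrt(1 + x * x)       # geometric mean of the two bounds
    chernoff = exp(-x * x / 2)              # Chernoff bound
    assert lower < q(x) < upper
    assert q(x) <= chernoff
    assert lower <= approx <= upper         # the geometric mean lies between the bounds
```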

==Inverse ''Q''==

The inverse ''Q''-function can be related to the [[error function#Inverse functions|inverse error functions]]:

:<math>Q^{-1}(y) = \sqrt{2}\ \mathrm{erf}^{-1}(1-2y) = \sqrt{2}\ \mathrm{erfc}^{-1}(2y)</math>

The function <math>Q^{-1}(y)</math> finds application in digital communications. It is usually expressed in [[Decibel#Field quantities and root-power quantities|dB]] and generally called the '''Q-factor''':

:<math>\mathrm{Q\text{-}factor} = 20 \log_{10}\!\left(Q^{-1}(y)\right)\!~\mathrm{dB}</math>

where ''y'' is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for [[quadrature phase-shift keying]] (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the [[Signal-to-noise ratio#Decibels|signal-to-noise ratio]] that yields a bit error rate equal to ''y''.

[[File:Q-factor vs BER.png|thumb|none|400px|Q-factor vs. bit error rate (BER).]]
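As a sketch, both the inverse and the Q-factor can be computed with Python's standard library via <code>statistics.NormalDist</code> (available since Python 3.8); the function names below are illustrative:

```python
from math import log10
from statistics import NormalDist

def q_inv(y: float) -> float:
    # Q^{-1}(y) = Phi^{-1}(1 - y) = -Phi^{-1}(y)
    return -NormalDist().inv_cdf(y)

def q_factor_db(ber: float) -> float:
    # Q-factor = 20 * log10(Q^{-1}(BER)) in dB; requires BER < 0.5
    return 20 * log10(q_inv(ber))

# Q(1) is about 0.158655, so the inverse recovers the argument:
print(q_inv(0.158655254))
```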

== Values ==

The ''Q''-function is well tabulated and can be computed directly in most mathematical software packages, such as [[R (programming language)|R]] and those available in [[Python (programming language)|Python]], [[MATLAB]] and [[Wolfram Mathematica|Mathematica]]. Some values of the ''Q''-function are given below for reference.

<!-- This table was calculated in Matlab as follows:
x = 0:0.1:6;
y = qfunc(x);
for i = 1:length(x)
  fprintf('Q(%.1f) = %.9f = 1/%.4f<br/>\n', x(i), y(i), 1/y(i));
end
-->
{{col-begin}}
{{col-4}}
{|class="wikitable"
! scope="row" | ''Q''(0.0)
| 0.500000000 || 1/2.0000
|-
! scope="row" | ''Q''(0.1)
| 0.460172163 || 1/2.1731
|-
! scope="row" | ''Q''(0.2)
| 0.420740291 || 1/2.3768
|-
! scope="row" | ''Q''(0.3)
| 0.382088578 || 1/2.6172
|-
! scope="row" | ''Q''(0.4)
| 0.344578258 || 1/2.9021
|-
! scope="row" | ''Q''(0.5)
| 0.308537539 || 1/3.2411
|-
! scope="row" | ''Q''(0.6)
| 0.274253118 || 1/3.6463
|-
! scope="row" | ''Q''(0.7)
| 0.241963652 || 1/4.1329
|-
! scope="row" | ''Q''(0.8)
| 0.211855399 || 1/4.7202
|-
! scope="row" | ''Q''(0.9)
| 0.184060125 || 1/5.4330
|}
{{col-4}}
{|class="wikitable"
! scope="row" | ''Q''(1.0)
| 0.158655254 || 1/6.3030
|-
! scope="row" | ''Q''(1.1)
| 0.135666061 || 1/7.3710
|-
! scope="row" | ''Q''(1.2)
| 0.115069670 || 1/8.6904
|-
! scope="row" | ''Q''(1.3)
| 0.096800485 || 1/10.3305
|-
! scope="row" | ''Q''(1.4)
| 0.080756659 || 1/12.3829
|-
! scope="row" | ''Q''(1.5)
| 0.066807201 || 1/14.9684
|-
! scope="row" | ''Q''(1.6)
| 0.054799292 || 1/18.2484
|-
! scope="row" | ''Q''(1.7)
| 0.044565463 || 1/22.4389
|-
! scope="row" | ''Q''(1.8)
| 0.035930319 || 1/27.8316
|-
! scope="row" | ''Q''(1.9)
| 0.028716560 || 1/34.8231
|}
{{col-4}}
{|class="wikitable"
! scope="row" | ''Q''(2.0)
| 0.022750132 || 1/43.9558
|-
! scope="row" | ''Q''(2.1)
| 0.017864421 || 1/55.9772
|-
! scope="row" | ''Q''(2.2)
| 0.013903448 || 1/71.9246
|-
! scope="row" | ''Q''(2.3)
| 0.010724110 || 1/93.2478
|-
! scope="row" | ''Q''(2.4)
| 0.008197536 || 1/121.9879
|-
! scope="row" | ''Q''(2.5)
| 0.006209665 || 1/161.0393
|-
! scope="row" | ''Q''(2.6)
| 0.004661188 || 1/214.5376
|-
! scope="row" | ''Q''(2.7)
| 0.003466974 || 1/288.4360
|-
! scope="row" | ''Q''(2.8)
| 0.002555130 || 1/391.3695
|-
! scope="row" | ''Q''(2.9)
| 0.001865813 || 1/535.9593
|}
{{col-4}}
{|class="wikitable"
! scope="row" | ''Q''(3.0)
| 0.001349898 || 1/740.7967
|-
! scope="row" | ''Q''(3.1)
| 0.000967603 || 1/1033.4815
|-
! scope="row" | ''Q''(3.2)
| 0.000687138 || 1/1455.3119
|-
! scope="row" | ''Q''(3.3)
| 0.000483424 || 1/2068.5769
|-
! scope="row" | ''Q''(3.4)
| 0.000336929 || 1/2967.9820
|-
! scope="row" | ''Q''(3.5)
| 0.000232629 || 1/4298.6887
|-
! scope="row" | ''Q''(3.6)
| 0.000159109 || 1/6285.0158
|-
! scope="row" | ''Q''(3.7)
| 0.000107800 || 1/9276.4608
|-
! scope="row" | ''Q''(3.8)
| 0.000072348 || 1/13822.0738
|-
! scope="row" | ''Q''(3.9)
| 0.000048096 || 1/20791.6011
|-
! scope="row" | ''Q''(4.0)
| 0.000031671 || 1/31574.3855
|}
{{col-end}}
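For reference, the tabulated values can be regenerated with a short Python sketch using only the standard library:

```python
from math import erfc, sqrt

def q(x):
    # Q(x) = (1/2) * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

# Print Q(0.0) through Q(4.0) in steps of 0.1, in the same format as the table.
for i in range(41):
    x = i / 10
    print(f"Q({x:.1f}) = {q(x):.9f} = 1/{1 / q(x):.4f}")
```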

== Generalization to high dimensions ==

The ''Q''-function can be generalized to higher dimensions:<ref>{{cite journal|last1=Savage|first1=I. R.|title=Mills ratio for multivariate normal distributions|journal=Journal of Research of the National Bureau of Standards Section B|date=1962|volume=66|issue=3|pages=93–96|doi=10.6028/jres.066B.011|zbl=0105.12601|doi-access=free}}</ref>

:<math>Q(\mathbf{x})= \mathbb{P}(\mathbf{X}\geq \mathbf{x}),</math>

where <math>\mathbf{X}\sim \mathcal{N}(\mathbf{0},\, \Sigma) </math> follows the multivariate normal distribution with covariance <math>\Sigma </math> and the threshold is of the form <math>\mathbf{x}=\gamma\Sigma\mathbf{l}^*</math> for some positive vector <math> \mathbf{l}^*>\mathbf{0}</math> and positive constant <math>\gamma>0</math>. As in the one-dimensional case, there is no simple analytical formula for the ''Q''-function. Nevertheless, the ''Q''-function can be [http://www.mathworks.com/matlabcentral/fileexchange/53796 approximated arbitrarily well] as <math>\gamma</math> becomes larger and larger.<ref>{{cite journal|last1=Botev|first1=Z. I.|title=The normal law under linear restrictions: simulation and estimation via minimax tilting|journal=Journal of the Royal Statistical Society, Series B|volume=79|pages=125–148|date=2016|doi=10.1111/rssb.12162|arxiv=1603.04166|bibcode=2016arXiv160304166B|s2cid=88515228}}</ref><ref name="bmc17">{{cite book |chapter=Logarithmically efficient estimation of the tail of the multivariate normal distribution |last1=Botev |first1=Z. I. |last2=Mackinlay |first2=D. |last3=Chen |first3=Y.-L. |date=2017 |publisher=IEEE |isbn=978-1-5386-3428-8 |title=2017 Winter Simulation Conference (WSC) |pages=1903–191 |doi=10.1109/WSC.2017.8247926 |s2cid=4626481}}</ref>
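Lacking a closed form, the multivariate tail probability can be estimated by naive Monte Carlo. The sketch below (function name illustrative) samples <math>\mathbf{X} = L\mathbf{z}</math> with <math>\Sigma = LL^T</math>; note that for small tail probabilities this plain estimator becomes inefficient, which is precisely what the minimax-tilting method cited above addresses:

```python
import random
from math import erfc, sqrt

def mvn_tail_mc(chol_lower, threshold, n=200_000, seed=1):
    """Crude Monte Carlo estimate of P(X >= x) componentwise, for X ~ N(0, Sigma),
    where chol_lower is a lower-triangular L with Sigma = L L^T."""
    rng = random.Random(seed)
    d = len(threshold)
    hits = 0
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0) for _ in range(d)]
        # sample = L @ z (lower-triangular matrix-vector product)
        sample = [sum(chol_lower[i][k] * z[k] for k in range(i + 1)) for i in range(d)]
        if all(s >= t for s, t in zip(sample, threshold)):
            hits += 1
    return hits / n

# With Sigma = I (independent components), P(X1 >= a, X2 >= b) = Q(a) * Q(b);
# at the origin this is 1/4.
est = mvn_tail_mc([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
print(est)  # close to 0.25
```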

== References ==
{{Reflist}}

[[Category:Normal distribution]]
[[Category:Special functions]]
[[Category:Functions related to probability distributions]]
[[Category:Articles containing proofs]]