This is an old revision of this page, as edited by 84.249.21.176(talk) at 12:00, 7 August 2020(new approximations and bounds added). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
If $Y$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$, then $X = \frac{Y-\mu}{\sigma}$ is standard normal and

$$P(Y > y) = P(X > x) = Q(x),$$

where $x = \frac{y-\mu}{\sigma}$.
Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.[3]
Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.
The Q-function can be expressed in terms of the error function, or the complementary error function, as[2]

$$Q(x) = \frac{1}{2}\left(1 - \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right).$$
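Since the complementary error function is available in standard math libraries, the relation $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$ gives a direct way to compute the Q-function; a minimal sketch:

```python
import math

def Q(x):
    # Q(x) = (1/2) * erfc(x / sqrt(2)), the upper tail of the standard normal
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(Q(0.0))  # 0.5: half the probability mass lies above the mean
print(Q(1.0))  # ~0.158655, the familiar one-sided "one sigma" tail
```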
An alternative form of the Q-function, known as Craig's formula after its discoverer, is expressed as:[4]

$$Q(x) = \frac{1}{\pi}\int_0^{\pi/2} \exp\!\left(-\frac{x^2}{2\sin^2\theta}\right) d\theta.$$
This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
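Because Craig's integrand is smooth and the integration range is fixed and finite, a simple quadrature rule reproduces the Q-function accurately. The sketch below uses a midpoint rule (the step count is an arbitrary choice) and checks the result against the erfc form:

```python
import math

def q_craig(x, n=10_000):
    # Midpoint-rule integration of Craig's formula:
    # Q(x) = (1/pi) * integral_0^{pi/2} exp(-x^2 / (2 sin^2 theta)) dtheta
    h = (math.pi / 2.0) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.exp(-x * x / (2.0 * math.sin(theta) ** 2))
    return total * h / math.pi

# Cross-check against Q(x) = (1/2) erfc(x / sqrt(2))
q_ref = 0.5 * math.erfc(1.5 / math.sqrt(2.0))
print(abs(q_craig(1.5) - q_ref))  # difference is tiny
```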
Craig's formula was later extended by Behnad (2020)[5] for the Q-function of the sum of two non-negative variables, as follows:

$$Q(x+y) = \frac{1}{\pi}\int_0^{\pi/2} \exp\!\left(-\frac{x^2}{2\sin^2\theta} - \frac{y^2}{2\cos^2\theta}\right) d\theta, \qquad x, y \geq 0.$$
Bounds and approximations
The Q-function is not an elementary function. However, the bounds

$$\left(\frac{x}{1+x^2}\right)\phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x > 0,$$

where $\phi(x)$ is the density function of the standard normal distribution,[6] become increasingly tight for large x, and are often useful.
Using the substitution $v = u^2/2$, the upper bound is derived as follows:

$$Q(x) = \int_x^\infty \phi(u)\,du < \int_x^\infty \frac{u}{x}\,\phi(u)\,du = \int_{x^2/2}^\infty \frac{e^{-v}}{x\sqrt{2\pi}}\,dv = \frac{e^{-x^2/2}}{x\sqrt{2\pi}} = \frac{\phi(x)}{x}.$$
The geometric mean of the upper and lower bound gives a suitable approximation for $Q(x)$:

$$Q(x) \approx \sqrt{\frac{x\,\phi(x)}{1+x^2} \cdot \frac{\phi(x)}{x}} = \frac{\phi(x)}{\sqrt{1+x^2}}, \qquad x > 0.$$
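The bounds and their geometric mean are easy to verify numerically; the following sketch checks that the sandwich holds and prints the relative error of the geometric-mean approximation at a few sample points:

```python
import math

def phi(x):
    # Standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in (1.0, 2.0, 3.0, 4.0):
    lower = (x / (1.0 + x * x)) * phi(x)
    upper = phi(x) / x
    geo = math.sqrt(lower * upper)     # = phi(x) / sqrt(1 + x^2)
    assert lower < Q(x) < upper        # the sandwich bound holds for x > 0
    print(f"x={x}: relative error of geometric mean = {abs(geo - Q(x)) / Q(x):.4f}")
```

As expected, the bounds (and hence the geometric-mean approximation) tighten as x grows.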
Tighter bounds and approximations of $Q(x)$ can also be obtained by optimizing the following expression:[6]

$$\tilde{Q}(x) = \frac{\phi(x)}{(1-a)x + a\sqrt{x^2 + b}}.$$

For $x \geq 0$, the best upper bound is given by $a = 0.344$ and $b = 5.334$ with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by $a = 0.339$ and $b = 5.510$ with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by $a = 1/\pi$ and $b = 2\pi$ with maximum absolute relative error of 1.17%.
Improved exponential bounds and a pure exponential approximation are[7]

$$Q(x) \leq \tfrac{1}{4}e^{-x^2} + \tfrac{1}{4}e^{-x^2/2} \leq \tfrac{1}{2}e^{-x^2/2}, \qquad x > 0,$$

$$Q(x) \approx \tfrac{1}{12}e^{-x^2/2} + \tfrac{1}{4}e^{-2x^2/3}, \qquad x > 0.$$
The above were generalized by Tanash & Riihonen (2020),[8] who showed that $Q(x)$ can be accurately approximated or bounded by

$$\tilde{Q}(x) = \sum_{n=1}^{N} a_n e^{-b_n x^2}.$$

In particular, they presented a systematic methodology to solve the numerical coefficients $\{(a_n, b_n)\}_{n=1}^{N}$ that yield a minimax approximation or bound: $\tilde{Q}(x) \approx Q(x)$, $\tilde{Q}(x) \leq Q(x)$, or $\tilde{Q}(x) \geq Q(x)$ for $x \geq 0$. With the example coefficients tabulated in the paper for $N = 20$, the relative and absolute approximation errors are less than $2.831 \times 10^{-6}$ and $1.416 \times 10^{-6}$, respectively.
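For a fixed set of decay rates $b_n$, the weights $a_n$ in a sum of exponentials of the form $\sum_n a_n e^{-b_n x^2}$ enter linearly, so even an ordinary least-squares fit already gives a decent approximation. The sketch below uses a hand-picked grid of decay rates as an illustration; it is not the minimax procedure of the paper:

```python
import math
import numpy as np

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Illustrative decay rates (an assumption of this sketch, not the paper's
# optimized coefficients); note b = 1/2 matches Q's asymptotic decay e^{-x^2/2}.
xs = np.linspace(0.0, 6.0, 601)
b = np.array([0.5, 0.6, 0.8, 1.1, 1.5, 2.0, 3.0, 5.0])
A = np.exp(-np.outer(xs ** 2, b))            # columns are exp(-b_n x^2)
y = np.array([Q(v) for v in xs])
a, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares fit of the weights a_n
max_abs_err = np.max(np.abs(A @ a - y))
print("max abs error:", max_abs_err)
```

A minimax (Chebyshev) fit over both the $a_n$ and $b_n$, as in the cited work, distributes the error evenly and does far better than this least-squares sketch.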
Another approximation of $Q(x)$ for $x \in [0,\infty)$ is given by Karagiannidis & Lioumpas (2007),[9] who showed for the appropriate choice of parameters $\{A, B\}$ that

$$f(x; A, B) = \frac{\left(1 - e^{-Ax}\right)e^{-x^2}}{B\sqrt{\pi}\,x} \approx \operatorname{erfc}(x).$$

The absolute error between $f(x; A, B)$ and $\operatorname{erfc}(x)$ over the range $[0, R]$ is minimized by evaluating

$$\{A, B\} = \underset{\{A, B\}}{\arg\min}\ \frac{1}{R}\int_0^R \left|f(x; A, B) - \operatorname{erfc}(x)\right| dx.$$

Using $R = 20$ and numerically integrating, they found the minimum error occurred when $\{A, B\} = \{1.98, 1.135\}$, which gave a good approximation for all $x \geq 0$. Substituting these values and using the relationship between $Q(x)$ and $\operatorname{erfc}(x)$ from above gives

$$Q(x) \approx \frac{\left(1 - e^{-1.4x}\right)e^{-x^2/2}}{1.135\sqrt{2\pi}\,x}, \qquad x \geq 0.$$
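The resulting closed form is cheap to evaluate; the sketch below (using the parameters $A = 1.98$, $B = 1.135$, so that $A/\sqrt{2} \approx 1.4$ in the Q-function form) prints its relative error at a few points:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_kl(x):
    # Karagiannidis-Lioumpas closed-form approximation of Q(x), x > 0
    return ((1.0 - math.exp(-1.4 * x)) * math.exp(-x * x / 2.0)
            / (1.135 * math.sqrt(2.0 * math.pi) * x))

for x in (0.5, 1.0, 2.0, 4.0):
    rel = abs(q_kl(x) / Q(x) - 1.0)
    print(f"x={x}: relative error {rel:.4f}")
```

Note the parameters were chosen to minimize *absolute* error, so the relative error grows somewhat in the far tail.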
A tighter and more tractable approximation of $Q(x)$ for positive arguments is given by López-Benítez & Casadevall (2011),[10] based on a second-order exponential function:

$$Q(x) \approx e^{-ax^2 - bx - c}, \qquad x \geq 0.$$

The fitting coefficients $(a, b, c)$ can be optimized over any desired range of arguments in order to minimize the sum of square errors ($a = 0.3842$, $b = 0.7640$, $c = 0.6964$) or minimize the maximum absolute error ($a = 0.4920$, $b = 0.2887$, $c = 1.1893$). This approximation offers some benefits, such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of $Q(x)$ is trivial and does not alter the algebraic form of the approximation).
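The second-order exponential idea can be illustrated without committing to any particular published coefficients: since $\ln Q(x)$ is nearly quadratic, fitting it with a quadratic polynomial by least squares already yields a usable approximation. The fitting range and criterion below are assumptions of this sketch, not those of the cited paper:

```python
import math
import numpy as np

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Fit ln Q(x) ~ p2*x^2 + p1*x + p0 over an illustrative range [0.5, 4];
# the signs of (a, b, c) in the article's form e^{-ax^2-bx-c} are absorbed here.
xs = np.linspace(0.5, 4.0, 200)
qs = np.array([Q(v) for v in xs])
p2, p1, p0 = np.polyfit(xs, np.log(qs), 2)
approx = np.exp(p2 * xs ** 2 + p1 * xs + p0)
rel_err = np.max(np.abs(approx / qs - 1.0))
print(f"fitted exponent: {p2:.4f} x^2 + {p1:.4f} x + {p0:.4f}, max rel err {rel_err:.4f}")
```

Fitting in the log domain roughly minimizes relative rather than absolute error; the leading coefficient comes out close to $-1/2$, matching the $e^{-x^2/2}$ decay of the true tail.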
The Q-function finds application in digital communications. It is usually expressed in dB and generally called Q-factor:

$$\mathrm{Q\text{-}factor} = 20\log_{10}\!\left(Q^{-1}(y)\right)\,\mathrm{dB} = 20\log_{10}\!\left(\sqrt{2}\,\operatorname{erfc}^{-1}(2y)\right)\,\mathrm{dB},$$

where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for QPSK in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal-to-noise ratio that yields a bit error rate equal to y.
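Since $Q$ is strictly decreasing, $Q^{-1}(y)$ can be obtained by simple bisection, giving the Q-factor without any special inverse-erfc routine; a minimal sketch:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_factor_db(ber):
    # Invert Q by bisection (Q is monotone decreasing on [0, inf)),
    # then express the result in dB as 20*log10(Q^{-1}(ber)).
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > ber:
            lo = mid
        else:
            hi = mid
    return 20.0 * math.log10(0.5 * (lo + hi))

print(q_factor_db(1e-9))  # ~15.56 dB for a BER of 1e-9
```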
Values
The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.
x      Q(x)           Q(x) as 1/N
0.0    0.500000000    1/2.0000
0.1    0.460172163    1/2.1731
0.2    0.420740291    1/2.3768
0.3    0.382088578    1/2.6172
0.4    0.344578258    1/2.9021
0.5    0.308537539    1/3.2411
0.6    0.274253118    1/3.6463
0.7    0.241963652    1/4.1329
0.8    0.211855399    1/4.7202
0.9    0.184060125    1/5.4330
1.0    0.158655254    1/6.3030
1.1    0.135666061    1/7.3710
1.2    0.115069670    1/8.6904
1.3    0.096800485    1/10.3305
1.4    0.080756659    1/12.3829
1.5    0.066807201    1/14.9684
1.6    0.054799292    1/18.2484
1.7    0.044565463    1/22.4389
1.8    0.035930319    1/27.8316
1.9    0.028716560    1/34.8231
2.0    0.022750132    1/43.9558
2.1    0.017864421    1/55.9772
2.2    0.013903448    1/71.9246
2.3    0.010724110    1/93.2478
2.4    0.008197536    1/121.9879
2.5    0.006209665    1/161.0393
2.6    0.004661188    1/214.5376
2.7    0.003466974    1/288.4360
2.8    0.002555130    1/391.3695
2.9    0.001865813    1/535.9593
3.0    0.001349898    1/740.7967
3.1    0.000967603    1/1033.4815
3.2    0.000687138    1/1455.3119
3.3    0.000483424    1/2068.5769
3.4    0.000336929    1/2967.9820
3.5    0.000232629    1/4298.6887
3.6    0.000159109    1/6285.0158
3.7    0.000107800    1/9276.4608
3.8    0.000072348    1/13822.0738
3.9    0.000048096    1/20791.6011
4.0    0.000031671    1/31574.3855
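A table of this kind is straightforward to regenerate from the erfc relation; the sketch below prints every fifth row in the same three-column format:

```python
import math

def Q(x):
    # Q(x) = (1/2) * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for i in range(0, 41, 5):
    x = i / 10.0
    q = Q(x)
    # Third column expresses Q(x) as a reciprocal, matching the table above
    print(f"Q({x:.1f}) = {q:.9f} = 1/{1.0 / q:.4f}")
```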
Generalization to high dimensions
The Q-function can be generalized to higher dimensions:[11]

$$Q(\mathbf{x}) = P(\mathbf{X} \geq \mathbf{x}),$$

where $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, \Sigma)$ follows the multivariate normal distribution with covariance $\Sigma$ and the threshold is of the form $\mathbf{x} = \gamma\mathbf{a}$ for some positive vector $\mathbf{a} > \mathbf{0}$ and positive constant $\gamma > 0$. As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be approximated arbitrarily well as $\gamma$ becomes larger and larger.[12][13]
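For moderate thresholds, the multivariate tail probability can be estimated by naive Monte Carlo (the cited works address the harder regime of large $\gamma$, where naive sampling breaks down). The covariance matrix and threshold below are hypothetical example values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of P(X >= gamma * a) for X ~ N(0, Sigma),
# with an illustrative 3-dimensional positive-definite covariance.
Sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])
a = np.array([1.0, 1.0, 1.0])
gamma = 1.0

X = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000)
p = np.mean(np.all(X >= gamma * a, axis=1))  # fraction of samples in the tail set
print(p)
```

The positive correlation makes the joint tail noticeably larger than the product of the three marginal tails $Q(1)^3 \approx 0.004$.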
^ a b Borjesson, P.; Sundberg, C.-E. (1979). "Simple Approximations of the Error Function Q(x) for Communications Applications". IEEE Transactions on Communications. 27 (3): 639–643. doi:10.1109/TCOM.1979.1094433.
^ Savage, I. R. (1962). "Mills ratio for multivariate normal distributions". Journal of Research of the National Bureau of Standards Section B. 66: 93–96. Zbl 0105.12601.
^ Botev, Z. I.; Mackinlay, D.; Chen, Y.-L. (2017). "Logarithmically efficient estimation of the tail of the multivariate normal distribution". 2017 Winter Simulation Conference (WSC). IEEE. pp. 1903–191. doi:10.1109/WSC.2017.8247926. ISBN 978-1-5386-3428-8.