
Talk:Rayleigh quotient


Interpretation from the eigenvalue equation


I found it very helpful to see that the Rayleigh quotient can be derived from the eigenvalue equation (left-multiply by the (conjugate) transpose of x, then divide both sides by x*x). The Rayleigh quotient is thus the approximate eigenvalue of an approximate eigenvector. Although this interpretation is quite trivial, I think it would fit Wikipedia well. — Preceding unsigned comment added by 94.210.213.220 (talk) 23:27, 7 December 2016 (UTC)
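For reference, a minimal sketch of the derivation described above, using the article's notation R(M, x) for the Rayleigh quotient and assuming x \neq 0:

    M x = \lambda x
    \Rightarrow\quad x^* M x = \lambda \, x^* x
    \Rightarrow\quad \lambda = \frac{x^* M x}{x^* x} = R(M, x)

If x is only approximately an eigenvector, the same quotient yields the corresponding approximate eigenvalue.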

Special case of covariance matrices

  • Why can \Sigma be written as A'A?
  • Is A Hermitian?
  • Here are my thoughts: because \Sigma is a covariance matrix, it is positive semi-definite, and hence admits a Cholesky decomposition \Sigma = A'A, with A' and A lower and upper triangular respectively. But these Cholesky factors are not Hermitian, so why use the same letter A as above?
  • Does this apply to only covariance matrices and not all positive semi-definite symmetric matrices, or are they the same thing?
  • The following is not a sentence and needs help from someone who knows what is trying to be expressed: "If a vector x maximizes \rho, then any vector kx (for k \neq 0) also maximizes it, one can reduce to the Lagrange problem of maximizing [summation] under the constraint [summation]." Also, why the constraint? (A sketch of the presumably intended argument follows this list.)
  • There's a proof in here and I'm not sure why. It would be helpful if one would write explicitly what is being proved beforehand, and why the proof is being written.
  • This section needs an intro that gives it context with respect to other mathematical techniques, and it needs to explain why and what is being done. 141.214.17.5 (talk) 16:20, 16 December 2008 (UTC)
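Regarding the quoted non-sentence above, here is a sketch of the argument it presumably intends, assuming M is symmetric with eigenvalues \lambda_1 \ge \dots \ge \lambda_n and an orthonormal basis of eigenvectors v_1, \dots, v_n. Writing x = \sum_i \alpha_i v_i,

    \rho(x) = \frac{x^T M x}{x^T x} = \frac{\sum_i \lambda_i \alpha_i^2}{\sum_i \alpha_i^2}

Because \rho(kx) = \rho(x) for any k \neq 0, one may normalise x so that \sum_i \alpha_i^2 = 1; maximising \rho then reduces to the Lagrange problem of maximising \sum_i \lambda_i \alpha_i^2 under the constraint \sum_i \alpha_i^2 = 1, which is where the constraint comes from.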
I have tweaked the prose a bit, which I hope clarifies most of the points above.
In answer to one specific point: yes, one could apply the argument to any symmetric positive semi-definite matrix M using its Cholesky factors L and L^T.
The specialisation here to covariance matrices is because this is a particularly important application: the argument establishes the properties of Principal Components Analysis (PCA), and its usefulness.
The empirical covariance matrix is defined to be A^T A (or, to be exact, a linear scaling of it), where A is the data matrix, so that gives a natural starting point. Jheald (talk) 09:56, 17 June 2013 (UTC)
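To make that starting point concrete, a small sketch (here assuming A is the n \times p data matrix with column means subtracted): the empirical covariance is

    \Sigma = \frac{1}{n-1} A^T A

so for any x \neq 0 the Rayleigh quotient becomes

    \rho(x) = \frac{x^T \Sigma x}{x^T x} = \frac{1}{n-1} \cdot \frac{\| A x \|^2}{\| x \|^2} \ge 0

which shows directly that \Sigma is positive semi-definite, and the x maximising \rho (the leading eigenvector of \Sigma) is the first principal component, i.e. the direction of maximum sample variance.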

Shouldn't this article be merged with Min-max theorem, which treats the same topic in more depth? Kjetil B Halvorsen 17:12, 11 February 2014 (UTC) — Preceding unsigned comment added by Kjetil1001 (talk | contribs)