{{Short description|Result about when a matrix can be diagonalized}}
In [[mathematics]], particularly [[linear algebra]] and [[functional analysis]], a '''spectral theorem''' is a result about when a [[linear operator]] or [[matrix (mathematics)|matrix]] can be [[Diagonalizable matrix|diagonalized]] (that is, represented as a [[diagonal matrix]] in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional [[vector space]]s but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of [[linear operator]]s that can be modeled by [[multiplication operator]]s, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative [[C*-algebra]]s. See also [[spectral theory]] for a historical perspective.
 
Examples of operators to which the spectral theorem applies are [[self-adjoint operator]]s or more generally [[normal operator]]s on [[Hilbert space]]s.
The spectral theorem also provides a [[canonical form|canonical]] decomposition, called the '''[[eigendecomposition of a matrix|spectral decomposition]]''', of the underlying vector space on which the operator acts.
 
[[Augustin-Louis Cauchy]] proved the spectral theorem for [[Symmetric matrix|symmetric matrices]], i.e., that every real, symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about [[determinant]]s.<ref>{{cite journal| doi=10.1016/0315-0860(75)90032-4 | volume=2 | title=Cauchy and the spectral theory of matrices | year=1975 | journal=Historia Mathematica | pages=1–29 | last1 = Hawkins | first1 = Thomas| doi-access=free }}</ref><ref>[http://www.mathphysics.com/opthy/OpHistory.html A Short History of Operator Theory by Evans M. Harrell II]</ref> The spectral theorem as generalized by [[John von Neumann]] is today perhaps the most important result of [[operator theory]].
 
This article mainly focuses on the simplest kind of spectral theorem, that for a [[self-adjoint]] operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
 
== Finite-dimensional case ==
<!-- This section is linked from [[Singular value decomposition]] -->
 
=== Hermitian maps and Hermitian matrices ===
We begin by considering a [[Hermitian matrix]] on <math>\mathbb{C}^n</math> (but the following discussion will be adaptable to the more restrictive case of [[symmetric matrix|symmetric matrices]] on {{nobr|<math>\mathbb{R}^n</math>).}} We consider a [[Hermitian operator|Hermitian map]] {{math|''A''}} on a finite-dimensional [[complex number|complex]] [[inner product space]] {{math|''V''}} endowed with a [[Definite bilinear form|positive definite]] [[sesquilinear form|sesquilinear]] [[inner product]] <math>\langle \cdot, \cdot \rangle</math>. The Hermitian condition on <math>A</math> means that for all {{math|''x'', ''y'' ∈ ''V''}},
<math display="block">\langle A x, y \rangle = \langle x, A y \rangle.</math>
 
An equivalent condition is that {{math| ''A''{{sup|*}} {{=}} ''A'' }}, where {{math| ''A''{{sup|*}} }} is the [[Hermitian conjugate]] of {{math|''A''}}. In the case that {{math|''A''}} is identified with a Hermitian matrix, the matrix of {{math| ''A''{{sup|*}} }} is equal to its [[conjugate transpose]]. (If {{math|''A''}} is a [[real matrix]], then this is equivalent to {{math| ''A''{{sup|T}} {{=}} ''A''}}, that is, {{math|''A''}} is a [[symmetric matrix]].)
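The equivalence is easy to test numerically. The following is a minimal NumPy sketch (the test matrix, vectors, and seed are arbitrary choices, not part of the theory); it builds a Hermitian matrix as {{math|''B'' + ''B''{{sup|*}}}} and checks both the matrix condition and the inner-product condition, with the inner product taken linear in its first argument, as in the proof below.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# B + B* is Hermitian for any square matrix B.
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T

# Matrix form of the condition: A equals its conjugate transpose.
assert np.allclose(A, A.conj().T)

# Inner-product form: <Ax, y> = <x, Ay>, with <u, v> = sum_i u_i conj(v_i),
# i.e. linear in the first argument.  np.vdot conjugates its first argument.
inner = lambda u, v: np.vdot(v, u)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
assert np.isclose(inner(A @ x, y), inner(x, A @ y))
</syntaxhighlight>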
 
This condition implies that all eigenvalues of a Hermitian map are real: to see this, it is enough to apply it to the case when {{math|1=''x'' = ''y''}} is an eigenvector. (Recall that an [[eigenvector]] of a linear map {{math|''A''}} is a non-zero vector {{math|''v''}} such that {{math|1=''Av'' = ''λv''}} for some scalar {{math|''λ''}}. The value {{math|''λ''}} is the corresponding [[eigenvalue]]. Moreover, the [[eigenvalues]] are roots of the [[characteristic polynomial]].)
 
{{math theorem | math_statement = If {{math|''A''}} is Hermitian on {{math|''V''}}, then there exists an [[orthonormal basis]] of {{math|''V''}} consisting of eigenvectors of {{math|''A''}}. Each eigenvalue of {{math|''A''}} is real.}}
 
We provide a sketch of a proof for the case where the underlying field of scalars is the [[complex number]]s.
 
By the [[fundamental theorem of algebra]], applied to the [[characteristic polynomial]] of {{math|''A''}}, there is at least one complex eigenvalue {{math|''λ''{{sub|1}}}} and corresponding eigenvector {{math|''v''{{sub|1}}}}, which must by definition be non-zero. Then since
<math display="block">\lambda_1 \langle v_1, v_1 \rangle = \langle A (v_1), v_1 \rangle = \langle v_1, A(v_1) \rangle = \bar\lambda_1 \langle v_1, v_1 \rangle,</math>
we find that {{math|''λ''{{sub|1}}}} is real. Now consider the space <math>\mathcal{K}^{n-1} = \text{span}\left(v_1\right)^\perp,</math> the [[orthogonal complement]] of {{math|''v''{{sub|1}}}}. By Hermiticity, <math>\mathcal{K}^{n-1}</math> is an [[invariant subspace]] of {{math|''A''}}. To see this, consider any <math>k \in \mathcal{K}^{n-1}</math>, so that <math>\langle k, v_1 \rangle = 0</math> by definition of <math>\mathcal{K}^{n-1}.</math> To satisfy invariance, we need to check that <math>A(k) \in \mathcal{K}^{n-1}.</math> This is true because <math>\langle A(k), v_1 \rangle = \langle k, A(v_1) \rangle = \langle k, \lambda_1 v_1 \rangle = 0.</math> Applying the same argument to <math>\mathcal{K}^{n-1}</math> shows that {{math|''A''}} has at least one real eigenvalue <math>\lambda_2</math> and corresponding eigenvector <math>v_2 \in \mathcal{K}^{n-1},</math> orthogonal to <math>v_1.</math> This can be used to build another invariant subspace <math>\mathcal{K}^{n-2} = \text{span}\left(\{v_1, v_2\}\right)^\perp.</math> Finite induction then finishes the proof.
 
The spectral theorem holds also for symmetric maps on finite-dimensional real inner product spaces, but the existence of an eigenvector does not follow immediately from the [[fundamental theorem of algebra]]. To prove this, consider {{math|''A''}} as a Hermitian matrix and use the fact that all eigenvalues of a Hermitian matrix are real.
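Numerically, the theorem is exactly what routines for Hermitian eigenproblems compute. A minimal NumPy sketch (the test matrix and seed are arbitrary; <code>numpy.linalg.eigh</code> is NumPy's routine for Hermitian matrices) verifies all three conclusions: real eigenvalues, an orthonormal eigenvector basis, and diagonalization in that basis.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                                   # Hermitian by construction

w, U = np.linalg.eigh(A)                             # eigenvalues and eigenvectors

assert np.issubdtype(w.dtype, np.floating)           # eigenvalues come out real
assert np.allclose(U.conj().T @ U, np.eye(4))        # columns are orthonormal
assert np.allclose(U.conj().T @ A @ U, np.diag(w))   # A is diagonal in this basis
</syntaxhighlight>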
 
The matrix representation of {{math|''A''}} in a basis of eigenvectors is diagonal, and by construction the proof gives a basis of mutually orthogonal eigenvectors; by choosing them to be unit vectors one obtains an orthonormal basis of eigenvectors. {{math|''A''}} can be written as a linear combination of pairwise orthogonal projections, called its '''spectral decomposition'''. Let
: <math display="block">\ V_{\lambda} = \{\ v \in V\ :\ A\ v = \lambda\ v\ \}\ </math>
be the eigenspace corresponding to an eigenvalue {{<math|''λ''}}>\ \lambda ~.</math> Note that the definition does not depend on any choice of specific eigenvectors. In general, {{math|''V''}} is the orthogonal direct sum of the spaces {{math|''V''<submath>''λ''\ V_{\lambda}\ </submath>}} where the index<math>\ \lambda\ </math> ranges over eigenvaluesthe [[Spectrum of a matrix|spectrum]] of <math>\ A ~.</math>
 
When the matrix being decomposed is Hermitian, the spectral decomposition is a special case of the [[Schur decomposition]] (see the proof in case of [[#Normal matrices|normal matrices]] below).
 
In other words, if {{math|''P''<sub>''λ''</sub>}} denotes the [[Orthogonal projection#Orthogonal projections|orthogonal projection]] onto {{math|''V''<sub>''λ''</sub>}}, and {{math|''λ''<sub>1</sub>, ..., ''λ''<sub>''m''</sub>}} are the eigenvalues of {{math|''A''}}, then the spectral decomposition may be written as
<math display="block">A = \lambda_1 P_{\lambda_1} + \cdots + \lambda_m P_{\lambda_m}.</math>
 
If the spectral decomposition of ''A'' is <math>A = \lambda_1 P_1 + \cdots + \lambda_m P_m</math>, then <math>A^2 = (\lambda_1)^2 P_1 + \cdots + (\lambda_m)^2 P_m</math> and <math>\mu A = \mu \lambda_1 P_1 + \cdots + \mu \lambda_m P_m</math> for any scalar <math>\mu.</math> It follows that for any polynomial {{mvar|f}} one has
<math display="block">f(A) = f(\lambda_1) P_1 + \cdots + f(\lambda_m) P_m.</math>
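Both identities can be checked directly in finite dimensions. A minimal NumPy sketch (the matrix is arbitrary, and {{math|1=''f''(''x'') = ''x''{{sup|2}} + 3''x''}} is an arbitrary sample polynomial); it uses one rank-one projection per basis eigenvector, which is equivalent to the sum over eigenspaces written above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T
w, U = np.linalg.eigh(A)

# One rank-one projection per orthonormal eigenvector; summing those that
# share an eigenvalue gives the projection onto the eigenspace V_lambda.
P = [np.outer(U[:, i], U[:, i].conj()) for i in range(4)]

# Spectral decomposition: A = sum_i lambda_i P_i.
assert np.allclose(A, sum(w[i] * P[i] for i in range(4)))

# Polynomial functional calculus with the sample polynomial f(x) = x**2 + 3*x.
fA = A @ A + 3 * A
assert np.allclose(fA, sum((w[i]**2 + 3 * w[i]) * P[i] for i in range(4)))
</syntaxhighlight>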
 
=== Spectral decomposition and the singular value decomposition ===
 
The spectral decomposition is a special case of the [[singular value decomposition]], which states that any matrix <math>A \in \mathbb{C}^{m \times n}</math> can be expressed as <math>A = U \Sigma V^*,</math> where <math>U \in \mathbb{C}^{m \times m}</math> and <math>V \in \mathbb{C}^{n \times n}</math> are [[unitary matrices]] and <math>\Sigma \in \mathbb{R}^{m \times n}</math> is a diagonal matrix. The diagonal entries of <math>\Sigma</math> are uniquely determined by <math>A</math> and are known as the [[singular values]] of <math>A.</math> If <math>A</math> is Hermitian, then <math>A^* = A</math> and <math>V \Sigma U^* = U \Sigma V^*,</math> which implies <math>U = V.</math>
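The relationship can be checked numerically. A minimal NumPy sketch; a positive definite test matrix is chosen so that the singular values coincide with the eigenvalues and the columns of {{math|''U''}} and {{math|''V''}} agree up to a unit phase (for a Hermitian matrix with negative eigenvalues, the singular values are instead the absolute values of the eigenvalues).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B @ B.conj().T + np.eye(4)            # Hermitian and positive definite

U, s, Vh = np.linalg.svd(A)               # A = U diag(s) V*
V = Vh.conj().T

# Singular values (sorted ascending) equal the eigenvalues here.
assert np.allclose(np.sort(s), np.linalg.eigvalsh(A))

# Corresponding columns of U and V agree up to a unit complex phase.
overlaps = np.abs(np.sum(U.conj() * V, axis=0))
assert np.allclose(overlaps, 1.0)
</syntaxhighlight>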
=== Normal matrices ===
{{main|Normal matrix}}
The spectral theorem extends to a more general class of matrices. Let {{math|''A''}} be an operator on a finite-dimensional inner product space. {{math|''A''}} is said to be [[normal matrix|normal]] if {{nobr|{{math|1=''A''{{sup|*}}''A'' = ''AA''{{sup|*}}}}.}}

One can show that {{math|''A''}} is normal if and only if it is unitarily diagonalizable, using the [[Schur decomposition]]. That is, any matrix can be written as {{nobr|{{math|1=''A'' = ''UTU''{{sup|*}}}},}} where {{math|''U''}} is unitary and {{math|''T''}} is [[triangular matrix|upper triangular]].
If {{math|''A''}} is normal, then one sees that {{nobr|{{math|1=''TT''{{sup|*}} = ''T''{{sup|*}}''T''}}.}} Therefore, {{math|''T''}} must be diagonal since a normal upper triangular matrix is diagonal (see [[normal matrix#Consequences|normal matrix]]). The converse is obvious.
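The argument can be watched in action with a complex Schur decomposition. A minimal SciPy sketch (conjugating a complex diagonal by a random unitary is just an arbitrary way to manufacture a normal, non-Hermitian test matrix):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)

# Build a normal, non-Hermitian test matrix A = Q D Q* with complex diagonal D.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
A = Q @ D @ Q.conj().T

assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A A* = A* A: A is normal

T, Z = schur(A, output='complex')                    # A = Z T Z*, T upper triangular
assert np.allclose(T, np.diag(np.diag(T)))           # for normal A, T is diagonal
</syntaxhighlight>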
 
In other words, {{math|''A''}} is normal if and only if there exists a [[unitary matrix]] {{math|''U''}} such that
: <math display="block">\ A = U\ D\ U^*\ ,</math>
 
: <math>A = U D U^*,</math>
 
where {{math|''D''}} is a [[diagonal matrix]]. Then, the entries of the diagonal of {{math|''D''}} are the [[eigenvalue]]s of {{math|''A''}}. The column vectors of {{math|''U''}} are the eigenvectors of {{math|''A''}} and they are orthonormal. Unlike the Hermitian case, the entries of {{math|''D''}} need not be real.
 
It is also easy to see that the [[singular value decomposition]] of a normal matrix <math>\mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^*</math> has <math>\mathbf{U} = \mathbf{V}</math>, since the columns of {{math|'''V'''}} (right-singular vectors) are [[eigenvectors]] of {{math|'''A'''<sup>⁎</sup>'''A'''}} and the columns of {{math|'''U'''}} (left-singular vectors) are eigenvectors of {{math|'''AA'''<sup>⁎</sup>}}.
 
== Compact self-adjoint operators ==
{{see also|Compact operator on Hilbert space#Spectral theorem}}
In the more general setting of Hilbert spaces, which may have an infinite dimension, the statement of the spectral theorem for [[compact operator|compact]] [[self-adjoint operators]] is virtually the same as in the finite-dimensional case.
 
{{math theorem | math_statement = Suppose {{math|''A''}} is a compact self-adjoint operator on a (real or complex) Hilbert space {{math|''V''}}. Then there is an [[orthonormal basis]] of {{math|''V''}} consisting of eigenvectors of {{math|''A''}}. Each eigenvalue is real.}}
 
As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. One cannot rely on determinants to show existence of eigenvalues, but one can use a maximization argument analogous to the variational characterization of eigenvalues.
 
If the compactness assumption is removed, then it is ''not'' true that every self-adjoint operator has eigenvectors. For example, the multiplication operator <math>M_{x}</math> on <math>L^2([0,1])</math>, which takes each <math>\psi(x) \in L^2([0,1])</math> to <math>x\psi(x)</math>, is bounded and self-adjoint but has no eigenvectors. However, its spectrum, suitably defined, is still equal to <math>[0,1]</math>; see [[Spectrum (functional analysis)#Spectrum of a bounded operator|spectrum of a bounded operator]].
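The phenomenon is genuinely infinite-dimensional, but a discretization gives some intuition. A minimal NumPy sketch (grid size arbitrary): the {{math|''n''}}-point discretization of <math>M_x</math> is the diagonal matrix <math>\operatorname{diag}(x_1, \dots, x_n)</math>, whose eigenvalues fill <math>[0,1]</math> ever more densely as <math>n</math> grows, while its eigenvectors are single-coordinate "spikes" that mimic the delta functions discussed below rather than converging to elements of <math>L^2([0,1])</math>.

<syntaxhighlight lang="python">
import numpy as np

n = 1000
x = (np.arange(n) + 0.5) / n        # grid points in (0, 1)
M = np.diag(x)                      # discretized multiplication operator

# Its eigenvalues are exactly the grid points, dense in [0, 1] as n grows;
# each eigenvector is a coordinate spike e_i, a stand-in for delta(t - x_i).
assert np.allclose(np.linalg.eigvalsh(M), x)
</syntaxhighlight>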
 
== Bounded self-adjoint operators ==
===Possible absence of eigenvectors===
 
The next generalization we consider is that of [[Self-adjoint operator#Bounded self-adjoint operators|bounded self-adjoint operators]] on a Hilbert space. Such operators may have no eigenvectors: for instance let {{math|''A''}} be the operator of multiplication by {{math|''t''}} on <math>L^2([0,1])</math>, that is,<ref>{{harvnb|Hall|2013}} Section 6.1</ref>
<math display="block">[A \varphi](t) = t \varphi(t).</math>
 
This operator does not have any eigenvectors ''in'' <math>L^2([0,1])</math>, though it does have eigenvectors in a larger space. Namely, the [[Distribution (mathematics)|distribution]] <math>\varphi(t)=\delta(t-t_0)</math>, where <math>\delta</math> is the [[Dirac delta function]], is an eigenvector when construed in an appropriate sense. The Dirac delta function is, however, not a function in the classical sense and does not lie in the Hilbert space {{math|''L''<sup>2</sup>[0, 1]}} or any other [[Banach space]]. Thus, the delta-functions are "generalized eigenvectors" of <math>A</math> but not eigenvectors in the usual sense.
 
===Spectral subspaces and projection-valued measures===
 
In the absence of (true) eigenvectors, one can instead look for a "spectral subspace" consisting of ''almost eigenvectors'', i.e., a closed subspace <math>V_E</math> of <math>H</math> associated with a [[Borel set]] <math>E \subset \sigma(A)</math> in the [[Spectrum (functional analysis)|spectrum]] of <math>A</math>. This subspace can be thought of as the closed span of generalized eigenvectors for <math>A</math> with eigen''values'' in <math>E</math>.<ref>{{harvnb|Hall|2013}} Theorem 7.2.1</ref> In the above example, where <math>[A \varphi](t) = t \varphi(t),</math> we might consider the subspace of functions supported on a small interval <math>[a,a+\varepsilon]</math> inside <math>[0,1]</math>. This space is invariant under <math>A</math>, and for any <math>\varphi</math> in this subspace, <math>A\varphi</math> is very close to <math>a\varphi</math>. Each subspace, in turn, is encoded by the associated projection operator, and the collection of all the subspaces is then represented by a [[projection-valued measure]].
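In the discretized picture of the previous subsection, the spectral subspace for <math>E = [a, a+\varepsilon]</math> is spanned by the grid spikes with <math>x_i \in E</math>, and the associated projection is a diagonal indicator matrix. A minimal NumPy sketch (grid, interval, and tolerance all arbitrary):

<syntaxhighlight lang="python">
import numpy as np

n = 1000
x = (np.arange(n) + 0.5) / n
A = np.diag(x)                      # discretized multiplication operator

a, eps = 0.30, 0.01
P = np.diag(((x >= a) & (x <= a + eps)).astype(float))   # projection onto V_E

# The spectral subspace is invariant under A ...
assert np.allclose(A @ P, P @ (A @ P))
# ... and on it, A is within eps of the scalar a (spectral-norm estimate).
assert np.linalg.norm((A - a * np.eye(n)) @ P, 2) <= eps + 1e-12
</syntaxhighlight>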
 
One formulation of the spectral theorem expresses the operator {{math|''A''}} as an integral of the coordinate function over the operator's [[Eigenvector#Infinite dimensions|spectrum]] <math>\sigma(A)</math> with respect to a projection-valued measure.<ref>{{harvnb|Hall|2013}} Theorem 7.12</ref>
<math display="block">A = \int_{\sigma(A)} \lambda \, d\pi(\lambda).</math>
 
When the self-adjoint operator in question is [[compact operator|compact]], this version of the spectral theorem reduces to something similar to the finite-dimensional spectral theorem above, except that the operator is expressed as a finite or countably infinite linear combination of projections, that is, the measure consists only of atoms.
An alternative formulation of the spectral theorem says that every bounded self-adjoint operator is unitarily equivalent to a multiplication operator. The significance of this result is that multiplication operators are in many ways easy to understand.
 
{{math theorem|name='''Theorem'''<ref>{{harvnb|Hall|2013}} Theorem 7.20</ref>|math_statement= Let {{math|''A''}} be a bounded self-adjoint operator on a Hilbert space {{math|''H''}}. Then there is a [[measure space]] {{math|(''X'', Σ, ''μ'')}}, a real-valued [[ess sup|essentially bounded]] measurable function {{math|''f''}} on {{math|''X''}}, and a [[unitary operator]] {{math|''U'' : ''H'' → ''L''<sup>2</sup>(''X'', ''μ'')}} such that
<math display="block"> U^* T U = A,</math>
where {{math|''T''}} is the [[multiplication operator]]:
<math display="block">[T \varphi](x) = f(x) \varphi(x),</math>
and <math>\|T\| = \|f\|_\infty</math>.}}
 
The spectral theorem is the beginning of the vast research area of functional analysis called [[operator theory]]; see also the [[spectral measure#Spectral measure|spectral measure]].
 
There is also an analogous spectral theorem for bounded [[normal operator]]s on Hilbert spaces. The only difference in the conclusion is that now {{math|''f''}} may be complex-valued.
for some measure <math>\mu</math> and some family <math>\{H_{\lambda}\}</math> of Hilbert spaces. The measure <math>\mu</math> is uniquely determined by <math>A</math> up to measure-theoretic equivalence; that is, any two measures associated to the same <math>A</math> have the same sets of measure zero. The dimensions of the Hilbert spaces <math>H_{\lambda}</math> are uniquely determined by <math>A</math> up to a set of <math>\mu</math>-measure zero.}}
 
The spaces <math>H_{\lambda}</math> can be thought of as something like "eigenspaces" for <math>A</math>. Note, however, that unless the one-element set <math>\{\lambda\}</math> has positive measure, the space <math>H_{\lambda}</math> is not actually a subspace of the direct integral. Thus, the <math>H_{\lambda}</math>'s should be thought of as "generalized eigenspaces"—that is, the elements of <math>H_{\lambda}</math> are "eigenvectors" that do not actually belong to the Hilbert space.
 
Although both the multiplication-operator and direct integral formulations of the spectral theorem express a self-adjoint operator as unitarily equivalent to a multiplication operator, the direct integral approach is more canonical. First, the set over which the direct integral takes place (the spectrum of the operator) is canonical. Second, the function we are multiplying by is canonical in the direct-integral approach: it is simply the function <math>\lambda\mapsto\lambda</math>.
A vector <math>\varphi</math> is called a [[cyclic vector]] for <math>A</math> if the vectors <math>\varphi,A\varphi,A^2\varphi,\ldots</math> span a dense subspace of the Hilbert space. Suppose <math>A</math> is a bounded self-adjoint operator for which a cyclic vector exists. In that case, there is no distinction between the direct-integral and multiplication-operator formulations of the spectral theorem. Indeed, in that case, there is a measure <math>\mu</math> on the spectrum <math>\sigma(A)</math> of <math>A</math> such that <math>A</math> is unitarily equivalent to the "multiplication by <math>\lambda</math>" operator on <math>L^2(\sigma(A),\mu)</math>.<ref>{{harvnb|Hall|2013}} Lemma 8.11</ref> This result represents <math>A</math> simultaneously as a multiplication operator ''and'' as a direct integral, since <math>L^2(\sigma(A),\mu)</math> is just a direct integral in which each Hilbert space <math>H_{\lambda}</math> is just <math>\mathbb{C}</math>.
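In finite dimensions, cyclicity of <math>\varphi</math> amounts to the Krylov matrix <math>[\varphi, A\varphi, \dots, A^{n-1}\varphi]</math> having full rank. A minimal NumPy sketch (eigenvalues and seed are arbitrary; the distinct eigenvalues make a generic vector cyclic, matching the "simple spectrum" discussion below):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
w = np.array([0.1, 0.7, 1.3, 2.9])            # distinct eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag(w) @ Q.T                      # real symmetric test matrix

def is_cyclic(A, phi):
    """Test whether phi, A phi, ..., A^(n-1) phi span the whole space."""
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, k) @ phi for k in range(n)])
    return np.linalg.matrix_rank(K) == n

assert is_cyclic(A, rng.standard_normal(4))   # a generic vector is cyclic
assert not is_cyclic(A, Q[:, 0])              # an eigenvector never is (n > 1)
</syntaxhighlight>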
 
Not every bounded self-adjoint operator admits a cyclic vector; indeed, by the uniqueness in the direct integral decomposition, this can occur only when all the <math>H_{\lambda}</math>'s have dimension one. When this happens, we say that <math>A</math> has "simple spectrum" in the sense of [[Self-adjoint operator#Spectral multiplicity theory|spectral multiplicity theory]]. That is, a bounded self-adjoint operator that admits a cyclic vector should be thought of as the infinite-dimensional generalization of a self-adjoint matrix with distinct eigenvalues (i.e., each eigenvalue has multiplicity one).
 
Although not every <math>A</math> admits a cyclic vector, it is easy to see that we can decompose the Hilbert space as a direct sum of invariant subspaces on which <math>A</math> has a cyclic vector. This observation is the key to the proofs of the multiplication-operator and direct-integral forms of the spectral theorem.
 
===Functional calculus===
One important application of the spectral theorem (in whatever form) is the idea of defining a [[functional calculus]]. That is, given a function <math>f</math> defined on the spectrum of <math>A</math>, we wish to define an operator <math>f(A)</math>. If <math>f</math> is simply a positive power, <math>f(x) = x^n</math>, then <math>f(A)</math> is just the <math>n</math>th power of <math>A</math>, <math>A^n</math>. The interesting cases are where <math>f</math> is a nonpolynomial function such as a square root or an exponential. Either of the versions of the spectral theorem provides such a functional calculus.<ref>E.g., {{harvnb|Hall|2013}} Definition 7.13</ref> In the direct-integral version, for example, <math>f(A)</math> acts as the "multiplication by <math>f</math>" operator in the direct integral:
<math display="block">[f(A)s](\lambda) = f(\lambda) s(\lambda).</math>
That is to say, each space <math>H_{\lambda}</math> in the direct integral is a (generalized) eigenspace for <math>f(A)</math> with eigenvalue <math>f(\lambda)</math>.
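In finite dimensions this prescription reduces to applying <math>f</math> to the eigenvalues. A minimal sketch comparing the eigendecomposition route with SciPy's built-in matrix exponential (the symmetric test matrix is arbitrary; <math>f = \exp</math> is chosen as a sample non-polynomial function):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
A = B + B.T                                   # real symmetric test matrix

w, U = np.linalg.eigh(A)
fA = U @ np.diag(np.exp(w)) @ U.T             # f(A) = U f(D) U*, here f = exp

assert np.allclose(fA, expm(A))               # agrees with SciPy's expm
</syntaxhighlight>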
 
== Unbounded self-adjoint operators ==
Many important linear operators which occur in [[Mathematical analysis|analysis]], such as [[differential operators]], are [[unbounded operator|unbounded]]. There is also a spectral theorem for [[self-adjoint operator]]s that applies in these cases. To give an example, every constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the [[Fourier transform]]; the multiplication operator is a type of [[Multiplier (Fourier analysis)|Fourier multiplier]].
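On a periodic grid this becomes concrete: the discrete Fourier transform unitarily conjugates {{math|d/d''x''}} into multiplication by {{math|''ik''}}. A minimal NumPy sketch (the domain <math>[0, 2\pi)</math>, grid size, and test function are all arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

n = 256
x = 2 * np.pi * np.arange(n) / n      # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers

u = np.sin(3 * x)                     # smooth periodic test function

# d/dx acts as multiplication by i*k on the Fourier coefficients.
du = np.fft.ifft(1j * k * np.fft.fft(u)).real

assert np.allclose(du, 3 * np.cos(3 * x))
</syntaxhighlight>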
 
In general, the spectral theorem for self-adjoint operators may take several equivalent forms.<ref>See Section 10.1 of {{harvnb|Hall|2013}}</ref> Notably, all of the formulations given in the previous section for bounded self-adjoint operators—the projection-valued measure version, the multiplication-operator version, and the direct-integral version—continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues. Specifically, the multiplication operator <math>A</math> on <math>L^2([0,1])</math> considered above is bounded only because of the choice of the domain <math>[0,1]</math>; the same operator on, e.g., <math>L^2(\mathbb{R})</math> would be unbounded.
 
The notion of "generalized eigenvectors" naturally extends to unbounded self-adjoint operators, as they are characterized as [[Probability_amplitude#Normalization|non-normalizable]] eigenvectors. Contrary to the case of [[Spectral_theorem#Spectral_subspaces_and_projection-valued_measures|almost eigenvectors]], however, the eigenvalues can be real or complex and, even if they are real, do not necessarily belong to the spectrum. Though, for self-adjoint operators there always exist a real subset of "generalized eigenvalues" such that the corresponding set of eigenvectors is [[Total_set|complete]].{{sfn|de la Madrid Modino|2001|pp=95-97}}
In general, spectral theorem for self-adjoint operators may take several equivalent forms.<ref>See Section 10.1 of {{harvnb|Hall|2013}}</ref> Notably, all of the formulations given in the previous section for bounded self-adjoint operators—the projection-valued measure version, the multiplication-operator version, and the direct-integral version—continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues.
 
== See also ==
== Notes ==
{{reflist}}
 
== References ==
* [[Sheldon Axler]], ''Linear Algebra Done Right'', Springer Verlag, 1997
* {{citation | last = Hall |first = B.C. |title = Quantum Theory for Mathematicians|series=Graduate Texts in Mathematics|volume=267 | year = 2013 |publisher = Springer|bibcode = 2013qtm..book.....H |isbn=978-1461471158}}
* [[Paul Halmos]], [https://www.jstor.org/stable/2313117 "What Does the Spectral Theorem Say?"], ''American Mathematical Monthly'', volume 70, number 3 (1963), pages 241–247 [http://www.math.wsu.edu/faculty/watkins/Math502/pdfiles/spectral.pdf Other link]
*{{cite thesis |last=de la Madrid Modino |first= R. |date=2001 |title= Quantum mechanics in rigged Hilbert space language|url=https://scholar.google.com/scholar?oi=bibs&cluster=2442809273695897641&btnI=1&hl=en |degree= PhD |publisher= Universidad de Valladolid}}
* [[Michael C. Reed|M. Reed]] and [[Barry Simon|B. Simon]], ''Methods of Mathematical Physics'', vols I–IV, Academic Press 1972.
* [[Gerald Teschl|G. Teschl]], ''Mathematical Methods in Quantum Mechanics with Applications to Schrödinger Operators'', https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/, American Mathematical Society, 2009.
* {{Cite book |title=Spectral Theory and Quantum Mechanics; Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation 2nd Edition |author= Valter Moretti |author-link= Valter Moretti |publisher= Springer |year=2017 |url=https://www.springer.com/it/book/9783319707051|isbn=978-3-319-70705-1 }}
 
{{Functional analysis}}
{{Spectral theory}}
 
[[Category:Spectral theory|*]]