**Quantum Computation Preliminaries - Hilbert Spaces (Part 2)**

Here is a LaTeX version of this tutorial, which will be easier to read.

Linear_Algebra.pdf


In this tutorial, we continue introducing mathematical preliminaries necessary to discuss quantum computation. The goal of this tutorial is to introduce notions from linear algebra, including Hilbert spaces, orthonormal bases and the Gram-Schmidt orthonormalization procedure. This tutorial assumes knowledge of abstract linear algebra at the undergraduate level, such as outlined in the linear algebra tutorials here or in the text of Friedberg, Insel, and Spence. Some familiarity with basic group theory will also be helpful.

**Linear Algebra**

**Introduction**

Linear algebra is the core mathematical tool for quantum computation. In order to model physical systems like qubits, we use a specific type of vector space known as Hilbert space, which has useful geometric properties. Namely, a Hilbert space is a complete metric space, which is a useful property to have later when we wish to approximate certain operators, like the Quantum Fourier Transform. We will also introduce the notion of an orthonormal basis. Using an orthonormal basis to represent vectors is quite useful in quantum computation, as the inner product of two orthogonal vectors is 0. This property is quite useful in simplifying key calculations in quantum algorithms and their proofs of correctness. The fact that an orthonormal basis consists solely of unit vectors is also desirable, as the basis vectors are all quantum states. We conclude with the Gram-Schmidt orthogonalization procedure, which allows us to construct an orthonormal basis from an arbitrary basis.

**Hilbert Spaces**

**Definition (Hilbert Space)**

A *Hilbert space* **H** is a vector space over the field **F** = **R** or **F** = **C** that has an inner product < . | . > : **H** x **H** -> **F** satisfying the following. For every u, v, w in **H** and any k in **F**, we have:

- Linearity in the Second Argument: < u | v + kw > = < u | v > + k< u | w >.
- Conjugate Symmetry: < u | v > = < v | u >^{*}.
- Positive Semidefiniteness: < u | u > >= 0, with < u | u > = 0 if and only if u = 0.
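To make the axioms concrete, here is a minimal Python sketch that checks them at a few sample vectors for the standard inner product on **C**^{n}. Python and the helper name `inner` are our own choices for illustration, not part of the tutorial:

```python
def inner(u, v):
    """Standard inner product on C^n: conjugate-linear in the first
    argument, linear in the second, matching the axioms above."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

u = [1 + 2j, 3j]
v = [2 - 1j, 1 + 1j]
w = [0.5j, 4]
k = 2 - 3j

# Linearity in the second argument: <u|v + kw> = <u|v> + k<u|w>
lhs = inner(u, [vi + k * wi for vi, wi in zip(v, w)])
rhs = inner(u, v) + k * inner(u, w)
assert abs(lhs - rhs) < 1e-12

# Conjugate symmetry: <u|v> = <v|u>^*
assert abs(inner(u, v) - inner(v, u).conjugate()) < 1e-12

# Positive semidefiniteness: <u|u> is real and non-negative
q = inner(u, u)
assert q.imag == 0 and q.real >= 0
```

Note the choice of convention here: some texts instead make the inner product linear in the first argument; this tutorial's convention (linear in the second) is the one standard in quantum computation.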

**Remark:**For the purpose of quantum computing, attention is restricted to complex Hilbert spaces. In particular, for quantum computation, only finite-dimensional Hilbert spaces are considered. Unless explicitly stated otherwise, we assume all Hilbert spaces are finite dimensional from here on out.

**Lemma 2.1.** Let **H** be a Hilbert space, and let u in **H**. We have that < u | u > in **R**.

**Proof:** Let z = < u | u >. By the definition of < . | . >, z is in **C**, so we may write z = a + bi for some a, b in **R**. As < . | . > is conjugate symmetric, we have that a + bi = z = z^{*} = a - bi. Necessarily, b = 0. So z in **R**. QED.

**Remark:** Lemma 2.1 asserts that < u | u > is always a real number, even when working with a complex Hilbert space. The positive semidefiniteness condition then yields that < u | u > is in fact a non-negative real number. These two conditions are necessary for using the inner product to define a metric.

We now introduce the notions of orthogonality and normality.

**Definition (Orthogonality).** Let **H** be a Hilbert space, and let u, v in **H**. We say that u, v are *orthogonal* if < u | v > = 0.

**Remark:**The inner product < u | v> measures how parallel the vectors u and v are. So if u and v are orthogonal (perpendicular), they share no parallel component. This is the geometric intuition as to why for orthogonal vectors u, v, < u | v > = 0.

**Definition (Norm).** Let V be a vector space over a field **F** \subset **C**. A *norm* is a function || . || : V -> [0, Infinity) satisfying the following properties. For every k in **F** and every u, v in V, we have:

- Triangle Inequality: ||u + v|| <= ||u|| + ||v||.
- Absolute Homogeneity: ||ku|| = |k| . ||u||.
- ||u|| = 0 if and only if u = 0.

There are a number of different norms that can be imposed on a vector space. The canonical norm we consider on a Hilbert space **H** is ||u|| = \sqrt{< u | u >}. We note that as the inner product of **H** is positive semidefinite, the norm ||u|| = 0 if and only if u = 0. It is also routine to verify that ||ku|| = |k| . ||u||.
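The induced norm is easy to play with numerically. Here is a short Python sketch (the helper names `inner` and `norm` are our own, for illustration only):

```python
import math

def inner(u, v):
    # Standard inner product on C^n (linear in the second argument).
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def norm(u):
    # Canonical norm induced by the inner product: ||u|| = sqrt(<u|u>).
    # <u|u> is a non-negative real, so taking .real only drops a zero part.
    return math.sqrt(inner(u, u).real)

u = [1 + 2j, 3 - 1j]
k = 2 - 3j

# Absolute homogeneity (Lemma 2.2 below): ||ku|| = |k| * ||u||
assert abs(norm([k * ui for ui in u]) - abs(k) * norm(u)) < 1e-12

# ||u|| = 0 if and only if u = 0
assert norm([0, 0]) == 0
```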

**Lemma 2.2.** Let **H** be a Hilbert space with the norm ||u|| = \sqrt{< u | u >}. For any k in **C**, we have that ||ku|| = |k| . ||u||.

**Proof:** Observe that:

||ku||^{2} = < ku | ku >
= k < ku | u >
= k < u | ku >^{*}
= kk^{*} < u | u >^{*}
= |k|^{2} < u | u >
= |k|^{2} ||u||^{2}.

Here, the second and fourth lines follow from the fact that the inner product is linear in the second argument. The third line follows from the fact that the inner product is conjugate symmetric. The fifth line follows from the fact that kk^{*} = |k|^{2}, as well as Lemma 2.1. Taking square roots of both sides yields the result. QED.

The proof that the norm ||u|| = \sqrt{< u | u >} satisfies the triangle inequality is somewhat technical and offers little insight into quantum computation. So we omit it. However, it is worth pointing out that the triangle inequality axiom of a norm is the missing component for constructing a metric on a Hilbert space **H**. That is, **H** together with the metric d(u, v) = ||u - v|| forms a metric space. In particular, the distance function d(u, v) = ||u - v|| satisfies the following:

- d(u, v) >= 0, with d(u, v) = 0 if and only if u = v.
- Symmetry: d(u, v) = d(v, u) for all u, v in **H**.
- Triangle Inequality: d(u, v) <= d(u, w) + d(w, v) for all u, v, w in **H**.
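The three metric properties can be spot-checked numerically. A minimal sketch, with `inner` and `dist` as our own illustrative helpers:

```python
import math

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def dist(u, v):
    # The induced metric: d(u, v) = ||u - v||.
    diff = [ui - vi for ui, vi in zip(u, v)]
    return math.sqrt(inner(diff, diff).real)

u, v, w = [1 + 1j, 0], [2j, 1], [1, 1 - 1j]

assert dist(u, u) == 0                                  # d(u, u) = 0
assert abs(dist(u, v) - dist(v, u)) < 1e-12             # symmetry
assert dist(u, v) <= dist(u, w) + dist(w, v) + 1e-12    # triangle inequality
```

A spot check is of course not a proof, but it is a quick way to catch a wrong inner-product convention when implementing these definitions.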

**Orthonormal Bases**

In this section, we introduce the notion of an orthonormal basis, including some basic results and techniques. In quantum computation, it is preferable for several reasons to select orthonormal bases for our Hilbert spaces. Each vector in an orthonormal basis is a unit vector, and therefore a quantum state. Additionally, any two distinct vectors in an orthonormal basis are orthogonal. This latter point is of particular importance in quantum computation, as it provides crucial cancellations in a number of important calculations.

**Definition (Orthonormal Basis).**

Let **H** be a Hilbert space. We say that a basis { \beta_{1}, \beta_{2}, ..., \beta_{n} } of **H** is an *orthonormal basis* if the following conditions hold:

- For each i in {1, ..., n}, ||\beta_{i}|| = 1.
- For any distinct i, j in {1, ..., n}, < \beta_{i} | \beta_{j} > = 0.

**Example:** Let **H** = **C**^{2} with the inner product < u | v > = u^{*}v. We refer to { (1, 0), (0, 1) } as the *standard basis* of **H**. Observe that < (1, 0) | (0, 1) > = 1(0) + 0(1) = 0. So the basis vectors are orthogonal. Now ||(1, 0)|| = \sqrt{1^{2} + 0^{2}} = 1. Similarly, ||(0, 1)|| = \sqrt{0^{2} + 1^{2}} = 1. So each basis vector is a unit vector. Thus, { (1, 0), (0, 1) } is an orthonormal basis of **H**.

**Example:** Let **H** be an n-dimensional Hilbert space. For i in {1, 2, ..., n}, let e_{i} denote the vector where every component is 0 except for the ith component, which is 1. The basis { e_{1}, e_{2}, ..., e_{n} } is the *standard basis* of **H**. Observe that ||e_{i}|| = 1 for all i. Furthermore, for distinct indices i, j, we have that:

< e_{i} | e_{j} > = \sum_{h=1}^{n} (e_{i})_{h}^{*} (e_{j})_{h} = 1(0) + 0(1) = 0.

We note that e_{i} is 1 precisely at component i, while e_{j} is 0 at component i. This yields the term 1(0). By considering component j of e_{i} and e_{j}, we obtain the term 0(1). Every other component contributes a 0(0) term. So < e_{i} | e_{j} > = 0. We conclude that the standard basis is orthonormal. The standard basis is a helpful example to keep in mind when considering orthonormal bases; it is in fact considered the canonical example of an orthonormal basis. Note that this example generalizes the previous example.
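The standard-basis calculation above can be verified exhaustively in a few lines of Python (our own illustrative sketch, exact integer arithmetic throughout):

```python
def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

n = 4
# Standard basis of C^n: e_i has a 1 in component i and 0 elsewhere.
basis = [[1 if h == i else 0 for h in range(n)] for i in range(n)]

# <e_i | e_j> is 1 when i = j and 0 otherwise, so the basis is orthonormal.
for i in range(n):
    for j in range(n):
        assert inner(basis[i], basis[j]) == (1 if i == j else 0)
```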

In the next section, we will discuss in greater detail how to construct an orthonormal basis from an arbitrary basis. We conclude this section by examining how orthogonal sets of vectors, and orthonormal bases in particular, come into play. Recall that for the Hilbert space **H** = **C**^{n}, we stated that the inner product is:

< u | v > = \sum_{i=1}^{n} u_{i}^{*} v_{i}.

This formula is not an additional assumption: it follows from the inner product axioms once u and v are written in coordinates relative to an orthonormal basis, as Lemma 2.3 shows. We introduce the Kronecker delta function to make the calculations clearer.

**Definition (Kronecker Delta Function).** The *Kronecker delta function* is defined by:

delta_{ij} = 1 if i = j,
delta_{ij} = 0 if i != j.

In particular, if { v_{1}, ..., v_{n} } is an orthogonal set of unit vectors in a Hilbert space **H**, then we have that: < v_{i} | v_{j} > = delta_{ij}.

**Lemma 2.3.** Let **H** be an n-dimensional Hilbert space, and let \beta = { \beta_{1}, ..., \beta_{n} } be an orthonormal basis for **H**. Then for any u, v in **H** with coordinates u_{i}, v_{i} relative to \beta:

< u | v > = \sum_{i=1}^{n} u_{i}^{*} v_{i}.

**Proof:** As \beta is a basis for **H**, we may write u = \sum_{i=1}^{n} u_{i} \beta_{i} and v = \sum_{i=1}^{n} v_{i} \beta_{i}, where u_{1}, ..., u_{n}, v_{1}, ..., v_{n} in **C**. We have that:

< u | v > = < v | u >^{*}
= < v | \sum_{i=1}^{n} u_{i} \beta_{i} >^{*}
= \sum_{i=1}^{n} u_{i}^{*} < v | \beta_{i} >^{*}
= \sum_{i=1}^{n} u_{i}^{*} < \beta_{i} | \sum_{j=1}^{n} v_{j} \beta_{j} >
= \sum_{i=1}^{n} u_{i}^{*} \sum_{j=1}^{n} v_{j} < \beta_{i} | \beta_{j} >
= \sum_{i=1}^{n} u_{i}^{*} \sum_{j=1}^{n} v_{j} delta_{ij}
= \sum_{i=1}^{n} u_{i}^{*} v_{i}.

We note that the first and fourth lines follow from the conjugate symmetry of the inner product. The third and fifth lines follow from the fact that the inner product is linear in the second argument. As \beta is an orthonormal basis, we have that < \beta_{i} | \beta_{j} > = delta_{ij}, which yields the sixth line. Finally, the last line follows from the fact that delta_{ij} = 0 whenever i != j and delta_{ii} = 1 for all i in {1, ..., n}. QED.
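Lemma 2.3 says the coordinate formula for the inner product works in *any* orthonormal basis, not just the standard one. Here is a small numerical check in a non-standard orthonormal basis of **C**^{2} (the basis choice and helper names are ours, for illustration):

```python
import math

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

s = 1 / math.sqrt(2)
# An orthonormal basis of C^2 other than the standard one.
beta = [[s, s], [s, -s]]

u = [1 + 2j, 3 - 1j]
v = [2j, 1 + 1j]

# Coordinates relative to beta: since beta is orthonormal, u_i = <beta_i|u>.
uc = [inner(b, u) for b in beta]
vc = [inner(b, v) for b in beta]

# Lemma 2.3: <u|v> agrees with the coordinate formula sum_i u_i^* v_i.
direct = inner(u, v)
via_coords = sum(a.conjugate() * b for a, b in zip(uc, vc))
assert abs(direct - via_coords) < 1e-12
```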

It is also helpful to note that any orthogonal set of non-zero vectors is linearly independent. The proof is similar to the one above.

**Lemma 2.4.** Let \beta = { \beta_{1}, ..., \beta_{n} } be an orthogonal set of non-zero vectors in a Hilbert space **H**. Then \beta is linearly independent.

**Proof:** Suppose v = \sum_{i=1}^{n} c_{i} \beta_{i} = 0. We have that:

< v | v > = < v | \sum_{i=1}^{n} c_{i} \beta_{i} >
= \sum_{i=1}^{n} c_{i} < v | \beta_{i} >
= \sum_{i=1}^{n} c_{i} < \beta_{i} | \sum_{j=1}^{n} c_{j} \beta_{j} >^{*}
= \sum_{i=1}^{n} c_{i} \sum_{j=1}^{n} c_{j}^{*} < \beta_{i} | \beta_{j} >^{*}
= \sum_{i=1}^{n} c_{i} c_{i}^{*} < \beta_{i} | \beta_{i} >.

Here, the second and fourth lines follow from the fact that the inner product is linear in the second argument. The third line follows from the fact that the inner product is conjugate symmetric. As the vectors in \beta are orthogonal, < \beta_{i} | \beta_{j} > = 0 whenever i != j, and < \beta_{i} | \beta_{i} >^{*} = < \beta_{i} | \beta_{i} > by Lemma 2.1, which yields the last line.

By assumption, the vectors of \beta are non-zero. So for all i in {1, ..., n}, < \beta_{i} | \beta_{i} > != 0. Note as well that c_{i} c_{i}^{*} = |c_{i}|^{2} = 0 if and only if c_{i} = 0. As v = 0, we have < v | v > = 0, and since every term |c_{i}|^{2} < \beta_{i} | \beta_{i} > is non-negative, each must be 0. It follows that each c_{i} = 0. So \beta is linearly independent.

QED.

**Gram-Schmidt Orthogonalization**

In this section, we examine the Gram-Schmidt procedure to construct an orthonormal basis from an arbitrary basis. The only requirement is that our vector space has an inner product, such as a Hilbert space. The Gram-Schmidt procedure takes as input a basis \beta = { \beta_{1}, ..., \beta_{n} }, and returns an orthonormal basis y = { y_{1}, ..., y_{n} } for the same space as \beta. The Gram-Schmidt algorithm proceeds iteratively. We begin by setting:

x_{1} := \beta_{1}
y_{1} := x_{1} / ||x_{1}||.

That is, y_{1} is just a normalized \beta_{1}. As 1/||x_{1}|| is a non-zero constant, y_{1} is a scalar multiple of \beta_{1}. So y_{1} and \beta_{1} span the same space (i.e., generate the same set of vectors). Now for i > 1, we set:

x_{i} := \beta_{i} - \sum_{k=1}^{i-1} < y_{k} | \beta_{i} > y_{k}
y_{i} := x_{i} / ||x_{i}||.

We similarly note that y_{i} is a unit vector and a scalar multiple of x_{i}. So x_{i} and y_{i} span the same space. In particular, we have that span({ x_{1}, ..., x_{n} }) = span({ y_{1}, ..., y_{n} }). The intuition for the right-hand side of x_{i} := \beta_{i} - \sum_{k=1}^{i-1} < y_{k} | \beta_{i} > y_{k} is that we start with \beta_{i} and subtract out its components along y_{1}, ..., y_{i-1}, leaving only the part of \beta_{i} that is orthogonal to each of them.

Before offering a proof of correctness for the Gram-Schmidt procedure, we first work through an example.

**Example:** Let \beta = { (1, 3), (-1, 2) } be a basis of **R**^{2}. We construct an orthonormal basis y from \beta.

- We have that \beta_{1} = (1, 3). So ||\beta_{1}|| = \sqrt{1^{2} + 3^{2}} = \sqrt{10}. Thus, y_{1} = \beta_{1} / ||\beta_{1}|| = ( 1/\sqrt{10}, 3/\sqrt{10} ).
- Now x_{2} = \beta_{2} - < y_{1} | \beta_{2} > y_{1}. We note that < y_{1} | \beta_{2} > = -1/\sqrt{10} + 6/\sqrt{10} = 5/\sqrt{10}. So < y_{1} | \beta_{2} > y_{1} = 5 ( 1/10, 3/10 ) = ( 1/2, 3/2 ). Thus, x_{2} = (-1, 2) - ( 1/2, 3/2 ) = ( -3/2, 1/2 ). We now normalize x_{2} to obtain y_{2}. As ||x_{2}|| = \sqrt{9/4 + 1/4} = \sqrt{10}/2, we have y_{2} = x_{2} / ||x_{2}|| = ( -3/\sqrt{10}, 1/\sqrt{10} ).

So y = { y_{1}, y_{2} } is our orthonormal basis constructed from \beta.
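The iterative procedure described above translates almost line-for-line into code. Here is a short Python sketch (the `gram_schmidt` and `inner` helpers are our own illustration, not a library function), checked against the worked example:

```python
import math

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt as described above: subtract the projections onto
    the previously built y_k, then normalize."""
    ys = []
    for b in basis:
        # x_i = beta_i - sum_k <y_k|beta_i> y_k
        x = list(b)
        for y in ys:
            c = inner(y, b)  # component of beta_i along y_k
            x = [xi - c * yi for xi, yi in zip(x, y)]
        nrm = math.sqrt(inner(x, x).real)
        ys.append([xi / nrm for xi in x])  # y_i = x_i / ||x_i||
    return ys

y1, y2 = gram_schmidt([[1, 3], [-1, 2]])

s = math.sqrt(10)
# Matches the worked example: y_1 = (1/sqrt(10), 3/sqrt(10)),
# y_2 = (-3/sqrt(10), 1/sqrt(10)).
assert all(abs(a - b) < 1e-12 for a, b in zip(y1, [1 / s, 3 / s]))
assert all(abs(a - b) < 1e-12 for a, b in zip(y2, [-3 / s, 1 / s]))
```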

**Theorem 3.1.** Let \beta = { \beta_{1}, ..., \beta_{n} } be a set of linearly independent vectors. The set of vectors y = { y_{1}, ..., y_{n} } produced by the Gram-Schmidt procedure is orthonormal and spans the same space as \beta.

**Proof:** The proof is by induction on i in {1, ..., n}. We have the base case of i = 1. Here, y_{1} = \beta_{1} / ||\beta_{1}||. As \beta is a set of linearly independent vectors, \beta_{1} != 0. So ||\beta_{1}|| > 0, and y_{1} is well-defined. Observe that y_{1} is a scalar multiple of \beta_{1}. So span({ y_{1} }) = span({ \beta_{1} }). By construction, y_{1} is a unit vector. So the theorem holds for i = 1. Now fix 1 <= k < n, and suppose the theorem holds for i = k. We prove the k+1 case.

By the inductive hypothesis, we have that { y_{1}, ..., y_{k} } is an orthonormal set that spans the same space as { \beta_{1}, ..., \beta_{k} }. As \beta is linearly independent, \beta_{k+1} \not\in span({ \beta_{1}, ..., \beta_{k} }) = span({ y_{1}, ..., y_{k} }). Note that < y_{j} | \beta_{k+1} > is a constant for each j in {1, ..., k}. Thus,

\sum_{j=1}^{k} < y_{j} | \beta_{k+1} > y_{j} in span({ y_{1}, ..., y_{k} }).

As \beta_{k+1} is not in span({ y_{1}, ..., y_{k} }), it follows that \beta_{k+1} is not equal to the above expression. So:

x_{k+1} = \beta_{k+1} - \sum_{j=1}^{k} < y_{j} | \beta_{k+1} > y_{j} != 0.

Thus, y_{k+1} = x_{k+1} / ||x_{k+1}|| is well-defined and a unit vector. It remains to show the following:

- (a) { y_{1}, ..., y_{k+1} } is an orthogonal set of vectors.
- (b) span({ \beta_{1}, ..., \beta_{k+1} }) = span({ y_{1}, ..., y_{k+1} }).

**Proof of (a).** We first show that { y_{1}, ..., y_{k+1} } is an orthogonal set of vectors. By the inductive hypothesis, { y_{1}, ..., y_{k} } is an orthogonal set of vectors. So it suffices to show that for all i in {1, ..., k}, < y_{i} | y_{k+1} > = 0. Fix such an i in {1, ..., k}. We have that:

< y_{i} | y_{k+1} > = 1/||x_{k+1}|| * < y_{i} | x_{k+1} >
= 1/||x_{k+1}|| * < y_{i} | \beta_{k+1} - \sum_{j=1}^{k} < y_{j} | \beta_{k+1} > y_{j} >
= 1/||x_{k+1}|| * ( < y_{i} | \beta_{k+1} > - \sum_{j=1}^{k} < y_{j} | \beta_{k+1} > < y_{i} | y_{j} > ).

Here, the second line follows from the definition of x_{k+1}, and the third line follows from the fact that the inner product is linear in the second argument. Now as { y_{1}, ..., y_{k} } is an orthonormal set, we have that for distinct i, j in {1, ..., k}, < y_{i} | y_{j} > = 0, while < y_{i} | y_{i} > = 1. Thus, the third line simplifies to:

1/||x_{k+1}|| * ( < y_{i} | \beta_{k+1} > - < y_{i} | \beta_{k+1} > ) = 0.

So { y_{1}, ..., y_{k+1} } is an orthogonal set of vectors, as desired.

**Proof of (b).** We now show that span({ \beta_{1}, ..., \beta_{k+1} }) = span({ y_{1}, ..., y_{k+1} }). By the inductive hypothesis, we have that span({ \beta_{1}, ..., \beta_{k} }) = span({ y_{1}, ..., y_{k} }). We first show that \beta_{k+1} in span({ y_{1}, ..., y_{k+1} }). Recall that:

y_{k+1} = 1/||x_{k+1}|| * ( \beta_{k+1} - \sum_{i=1}^{k} < y_{i} | \beta_{k+1} > y_{i} ).

Solving for \beta_{k+1}, we obtain:

\beta_{k+1} = ||x_{k+1}|| * y_{k+1} + \sum_{i=1}^{k} < y_{i} | \beta_{k+1} > y_{i}.

So \beta_{k+1} in span({ y_{1}, ..., y_{k+1} }). Together with the inductive hypothesis, this gives span({ \beta_{1}, ..., \beta_{k+1} }) \subset span({ y_{1}, ..., y_{k+1} }).

Now by assumption, { \beta_{1}, ..., \beta_{k+1} } is a set of k+1 linearly independent vectors. As { y_{1}, ..., y_{k+1} } is an orthogonal set of non-zero vectors, it is linearly independent by Lemma 2.4. Thus, as span({ \beta_{1}, ..., \beta_{k+1} }) \subset span({ y_{1}, ..., y_{k+1} }) and both spans have dimension k+1, we in fact have span({ \beta_{1}, ..., \beta_{k+1} }) = span({ y_{1}, ..., y_{k+1} }). This completes the proof of part (b), as well as the main theorem.

QED.
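Both conclusions of Theorem 3.1 can be sanity-checked numerically on a concrete basis of **C**^{3}. The sketch below is our own, self-contained illustration (not code from the tutorial); part (a) is checked as orthonormality, and part (b) by recovering each \beta_{i} from its coordinates in the y-basis:

```python
import math

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def gram_schmidt(basis):
    # Same procedure as described in this section.
    ys = []
    for b in basis:
        x = list(b)
        for y in ys:
            c = inner(y, b)
            x = [xi - c * yi for xi, yi in zip(x, y)]
        nrm = math.sqrt(inner(x, x).real)
        ys.append([xi / nrm for xi in x])
    return ys

# A linearly independent set in C^3 (determinant 6 + 10j != 0).
beta = [[1 + 1j, 0, 2], [1, 3j, -1], [0, 1, 1 - 2j]]
ys = gram_schmidt(beta)

# Part (a): orthonormality, <y_i|y_j> = delta_ij.
for i in range(3):
    for j in range(3):
        assert abs(inner(ys[i], ys[j]) - (1 if i == j else 0)) < 1e-9

# Part (b): equal spans. Each beta_i is recovered from its coordinates
# in the y-basis: beta_i = sum_j <y_j|beta_i> y_j.
for b in beta:
    coords = [inner(y, b) for y in ys]
    recon = [sum(c * y[h] for c, y in zip(coords, ys)) for h in range(3)]
    assert all(abs(r - bi) < 1e-9 for r, bi in zip(recon, b))
```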

**Conclusion**

In this tutorial, we introduced the setting for quantum computation: the Hilbert space. We also examined the notion of an orthonormal basis, as well as the Gram-Schmidt procedure for constructing orthonormal bases. In the next tutorial, we will examine Hermitian and unitary operators, projectors, and tensor products, which provide the remaining linear algebraic tools needed to discuss quantum computation.

This post has been edited by **macosxnerd101**: 07 January 2019 - 11:57 PM