#1 macosxnerd101

Quantum Computation Preliminaries- More Linear Algebra (Part 3)

Posted 08 January 2019 - 03:29 PM

Here is a LaTeX version of this tutorial, which will be easier to read.
Attached File  Linear_Algebra_Part3.pdf (200.17K)

In this tutorial, we continue introducing mathematical preliminaries necessary to discuss quantum computation. The goal of this tutorial is to introduce additional notions from linear algebra, including unitary and Hermitean operators, projectors, and tensor products. This tutorial assumes knowledge of abstract linear algebra at the undergraduate level, such as outlined in the linear algebra tutorials here or in the text of Friedberg, Insel, and Spence, as well as the previous tutorial. Some familiarity with basic group theory will also be helpful.


Linear Algebra

Introduction

In the last Linear Algebra tutorial, we introduced the setting for quantum computation; namely, Hilbert spaces. Here, we introduce key tools necessary for implementing quantum computation. Recall from the previous tutorial that quantum states are represented as unit vectors on the three-dimensional unit sphere S^{3}. Any operation that evolves the quantum system must map quantum states to other quantum states. The group of unitary operators precisely satisfies this condition. Quantum computation is also inherently probabilistic; physically, quantum states exist in a superposition of pure states. In order to determine the quantum state, we must measure it. However, any measurement interacts with and affects the quantum system, forcing a pure state. Quantum measurement operators are implemented as projectors, where each outcome has a corresponding projector. These projection operators are, in particular, Hermitean. We conclude with Dirac's bra-ket notation, as well as tensor products. Quantum systems with multiple qubits are modeled using tensor products of the underlying Hilbert spaces.

Unitary and Hermitean Operators

We begin with a couple definitions.

Definition (Adjoint.) Let A be an m x n matrix over C. The adjoint of A, denoted A*, is the n x m matrix obtained by taking the transpose of A and then taking the complex conjugate of each entry. So:

[A*]_{ij} = (A_{ji})*.


Remark: Recall that the * operator is used to denote the conjugate of a complex number. It is helpful to think of a complex number z as a 1 x 1 matrix. So z* is obtained by taking the transpose of z and then taking the complex conjugate. So the notation for the adjoint of A, A*, is a generalization of taking the conjugate of a complex number rather than an abuse of notation.

Definition. Let H and J be Hilbert spaces. The set of linear operators from H \to J is denoted L(H, J). The set L(H, H) is denoted L(H).

Remark: We note that L(H, J) is in fact a Hilbert space over C. Recall that linear operators over finite dimensional vector spaces can be represented as matrices. That is, if \text{dim}(H) = n, \text{dim}(J) = m, and T \in L(H, J), then T can be represented as an m x n matrix. Addition and scalar multiplication of the matrices is componentwise. So \text{dim}( L(H, J) ) = mn. For A, B \in L(H, J), the inner product < A, B > = \text{Tr}(A*B), where \text{Tr} is the trace operator.

The adjoint satisfies several properties for every A, B \in L(H, J) and every k \in C (the product rule holds whenever the product is defined); a numerical check of these identities follows the list:
  • (A*)* = A.
  • (A + kB)* = A* + k*B*.
  • (AB)* = B*A*.
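
These identities can be checked numerically. The following is a minimal numpy sketch (numpy and the helper name adjoint are illustrative choices, not part of the tutorial itself) that verifies them on random complex matrices, with the adjoint computed as the conjugate transpose:

import numpy as np

rng = np.random.default_rng(0)

def adjoint(M):
    # The adjoint M* is the conjugate transpose of M.
    return M.conj().T

A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
C = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
k = 2 - 3j

print(np.allclose(adjoint(adjoint(A)), A))                                     # (A*)* = A
print(np.allclose(adjoint(A + k * B), adjoint(A) + np.conj(k) * adjoint(B)))   # (A + kB)* = A* + k*B*
print(np.allclose(adjoint(A @ C), adjoint(C) @ adjoint(A)))                    # (AC)* = C*A*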


There is an equivalent formulation of the adjoint that is particularly useful in quantum computation. Let H and J be Hilbert spaces. For every T \in L(H, J), there exists a unique T* \in L(J, H) satisfying < v | Tu> = < T*v | u> for all u \in H and all v \in J. Again, we refer to T* as the adjoint. While we omit a proof that these definitions are equivalent, we can provide some intuition. We note that:

< v | Tu> = v*(Tu) = (v*T)u.


Now by our properties of adjoints, we note that (v*T) = (T*v)*. So:

(v*T)u = (T*v)*u = < T*v | u >.
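
This equivalent formulation can also be sanity-checked numerically. Below is a small numpy sketch (an illustrative aside), using the convention < x | y > = x*y from this tutorial; np.vdot conjugates its first argument, so it computes exactly this bracket:

import numpy as np

rng = np.random.default_rng(1)

def bracket(x, y):
    # < x | y > = x*y; np.vdot conjugates its first argument.
    return np.vdot(x, y)

T = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))   # T maps C^2 into C^3
u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

print(np.isclose(bracket(v, T @ u), bracket(T.conj().T @ v, u)))      # < v | Tu > = < T*v | u >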


With the notion of the adjoint in tow, we can now define unitary and Hermitean operators.

Definition (Unitary Operator). Let A be an n x n matrix over C. We say that A is \textit{unitary} if AA* = I. That is, A^{-1} = A*.

Remark: The above definition immediately implies that unitary operators are closed under inverses: if A is unitary, then A^{-1} = A* is itself unitary, as (A*)(A*)* = A*A = I. Now we note that I* = I, so the identity matrix is unitary. We next observe that the product of two unitary operators is unitary. Suppose that A and B are unitary operators of dimension n. We note that:

(AB)(AB)* = (AB)B*A* = A(BB*)A* = AA* = I.


So (AB)* = (AB)^{-1}, and we conclude that AB is unitary. As matrix multiplication is associative, we have that the unitary operators of dimension n form a group, which we denote U(n). The subgroup \text{SU}(n) \leq U(n) of unitary matrices with determinant 1 is referred to as the \textit{special unitary group}. \text{SU}(2) is of particular interest for quantum computation, as it provides a linear representation of the unit quaternions.

Definition (Hermitean Operator). Let A be an n x n matrix over C. We say that A is \textit{Hermitean} if A = A*.

Remark: We note that if A is Hermitean, then so is A*. Namely, (A*)* = A = A*.

Example: The identity matrix I is both Hermitean and unitary.

Example: The Pauli spin matrices are also Hermitean and unitary. The three Pauli spin matrices are given by:
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

The Pauli spin matrices, together with the identity matrix, provide a linear representation of the quaternions. They are also key tools in designing quantum algorithms using quantum circuits. We will explore these notions more in later tutorials.
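
As a quick numerical sanity check (an illustrative numpy aside, not part of the tutorial itself), one can verify that each Pauli spin matrix is both Hermitean and unitary:

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

for name, P in [("X", X), ("Y", Y), ("Z", Z)]:
    print(name,
          np.allclose(P, P.conj().T),        # Hermitean: P = P*
          np.allclose(P @ P.conj().T, I2))   # unitary: PP* = I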

Example: The matrix 2I is Hermitean, but not unitary. Observe that (2I)* = 2*I* = 2I, so 2I is Hermitean. However, (2I)(2I)* = 4I \neq I, so 2I is not unitary.

Example: Consider the following matrix:
Attached Image

Observe that:
Attached Image

So A is unitary. We also note that A \neq A*, so A is not Hermitean.

We conclude this section by examining some basic properties of Hermitean and unitary operators.

Lemma 2.1. Let H be a Hilbert space, and let A \in L(H) be Hermitean. We have that < Au | v > = < u | Av >, for all u, v \in H.

Proof. Fix u, v \in H. By the definition of the adjoint, we have that < u | Av > = < A*u | v >. As A is Hermitean, A = A*. So < A*u | v > = < Au | v >, as desired. QED.

Lemma 2.2. Let A be a Hermitean operator, and let k \in R. Then kA is Hermitean.

Proof. As k \in R, k = k*. So (kA)* = k*A* = kA* = kA, where the last equality follows from the fact that A is Hermitean. QED.

We next show that unitary operators preserve the inner product. This is particularly important, as it implies that unitary operators act on the three-dimensional unit sphere S^{3} by rotating it.

Lemma 2.3. Let H be a Hilbert space, and let A \in L(H) be unitary. For any u, v \in H, we have that < Au | Av > = < u | v >.

Proof. Fix u, v \in H. By definition of the adjoint, we have that < Au | Av > = < A*Au | v >. As A is unitary, A* = A^{-1}, so A*A = I. So < A*Au | v > = < u | v >, as desired. QED.

Remark: Let H be a Hilbert space, and let u \in H. Let A \in L(H) be unitary. By Lemma 2.3, ||Au||^{2} = < Au | Au > = < u | u > = ||u||^{2}. In particular, if u is a unit vector (i.e., a quantum state), A maps u to another unit vector. So any unitary operator acts on S^{3} by rotating the sphere. For this reason, we assume that quantum systems evolve unitarily. A consequence of this is that quantum computation is reversible. Namely, we can apply the inverse of any unitary operator to undo a given transformation. As a result, the geometry of S^{3} is closely related to quantum computation.
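
The following numpy sketch illustrates this remark: it builds a unitary matrix from the QR factorization of a random complex matrix and checks that it preserves inner products and norms. (The construction via np.linalg.qr is just one convenient way to produce a unitary matrix; it is an illustrative choice, not part of the tutorial.)

import numpy as np

rng = np.random.default_rng(2)

M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)   # Q has orthonormal columns, so Q is unitary here (illustrative construction)

u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

print(np.allclose(Q @ Q.conj().T, np.eye(4)))                  # QQ* = I
print(np.isclose(np.vdot(Q @ u, Q @ v), np.vdot(u, v)))        # < Qu | Qv > = < u | v >
print(np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u)))    # ||Qu|| = ||u||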


Projectors
In order to leverage a quantum system, we need a way to obtain information about its state. Quantum mechanics puts severe limitations on the amount of information that can be extracted and the manner in which it may be extracted. In particular, information may not be retrieved in a passive manner. In order to obtain information about the system, it is necessary to make an observation or measurement. A standard quantum measurement is a projective measurement, which includes a set of possible outcomes and a projection operator (or projector) associated with each outcome. In this section, we introduce the notion of a projector and examine a number of relevant properties.

Definition (Projector). Let H be a Hilbert space. An orthogonal projection operator or projector on H is a linear operator P \in L(H) satisfying:
  • P is Hermitean; that is, P = P*, and:
  • P^{2} = P; that is, P is idempotent.


Remark: There are a number of projectors for a given Hilbert space H. The identity map I and the zero-map 0 (which sends every element to the 0 vector) are the two trivial projectors. Note as well that for any unit vector x \in H (which we view as a column vector), P_{x} := xx* is the orthogonal projector onto the span of x. This fact will be proven later. There are numerous other projectors, as well. We note that the subspaces of H are in bijection with the projectors of H.

We prove a number of useful facts about projectors.

Lemma 2.4. Let H be a Hilbert space. An operator P \in L(H) is a projector if and only if P = P*P.

Proof. Suppose first that P is a projector. So P^{2} = P. As P is a projector, P = P*. So P^{2} = P*P = P, as desired. Conversely, suppose P = P*P. We first show that P is Hermitean. Observe that P* = (P*P)* = P*P = P, as desired. Next, we show that P is idempotent. By assumption, P = P*P. As P is Hermitean, P* = P. So P = P^{2}. Thus, P is a projector. QED.
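
As an illustration, Lemma 2.4 can be checked numerically for a concrete projector, say the projector onto the span of the first two standard basis vectors of C^{3} (a minimal numpy sketch; the particular projector is an illustrative choice):

import numpy as np

# Projector onto span{e1, e2} inside C^3, written with respect to the standard basis.
P = np.diag([1, 1, 0]).astype(complex)

print(np.allclose(P, P.conj().T @ P))   # P = P*P
print(np.allclose(P, P.conj().T))       # P is Hermitean
print(np.allclose(P @ P, P))            # P is idempotent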

The next lemma provides that if two projectors P and Q are orthogonal (i.e., the subspaces corresponding to P and Q respectively are orthogonal), then P + Q is also a projector. This is particularly useful, as it allows us to construct new projectors from existing projectors.

Lemma 2.5. Let H be a Hilbert space, and let P, Q \in L(H) be projectors such that PQ = 0. Then QP = 0 and P + Q is a projector.

Proof. We first show that QP = 0. We note that QP = Q*P* = (PQ)* = 0* = 0, where the first equality follows from the fact that P and Q are projectors (so P = P* and Q = Q*). We now show that P + Q is a projector. As P, Q are Hermitean, so is P + Q. It suffices to show that P + Q is idempotent. We note that:

(P+Q)^{2} = P^{2} + PQ + QP + Q^{2}
= P^{2} + Q^{2}
= P + Q.


Here, the second line follows from the fact that PQ = QP = 0. Now the third line follows from the fact that P, Q are projectors, so P^{2} = P and Q^{2} = Q. We conclude that P + Q is a projector, as desired. QED.
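
A small numpy sketch illustrating Lemma 2.5 (the particular projectors below are illustrative choices): take P and Q projecting onto the spans of e_{1} and e_{2} in C^{3}, so that PQ = 0, and check that P + Q is again a projector.

import numpy as np

# P projects onto span{e1}, Q projects onto span{e2}; these subspaces are orthogonal, so PQ = 0.
P = np.diag([1, 0, 0]).astype(complex)
Q = np.diag([0, 1, 0]).astype(complex)

R = P + Q
print(np.allclose(P @ Q, 0))        # PQ = 0
print(np.allclose(Q @ P, 0))        # QP = 0
print(np.allclose(R, R.conj().T))   # P + Q is Hermitean
print(np.allclose(R @ R, R))        # (P + Q)^2 = P + Q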

We next establish the bijection between projectors and subspaces of a Hilbert space. This proof is somewhat involved, but it provides insight into key properties. Namely, if P is a projector corresponding to the subspace W, then Tr(P) = dim(W). Before doing so, we first introduce a helper lemma. Namely, we verify that projectors indeed do project orthogonally.


Lemma 2.6. Let H be a Hilbert space, and let P \in L(H) be a projector. We have that Pu = u for every u \in Im(P). Furthermore, P projects orthogonally onto Im(P); that is, for every v \in H and every u \in Im(P), < u | Pv - v > = 0.

Proof. We note that Im(P) = { Pv : v \in H }. Let u \in Im(P). So u = Pv for some v \in H. We have that Pu = P(Pv) = P^{2}v = Pv. As P is a projector, we have that P^{2} = P, which yields the last equality. As Pv = u by assumption, we have that Pu = u.

We next verify that P projects orthogonally onto Im(P). That is, for any w \in H, the vector Pw - w is orthogonal to Im(P). Fix such a w, and let u \in Im(P) be arbitrary. We show that < u | Pw - w > = 0. As u \in Im(P), we may write u = Px for some x \in H. So:

< u | Pw - w > = < Px | Pw - w >
= < Px | Pw > - < Px | w >
= < P*Px | w > - < Px | w >
= < Px | w > - < Px | w >
= 0.


Here, the second line follows from the fact that the inner product is linear in the second argument. We note that the third line follows from the definition of the adjoint, and the fourth line follows from the fact that P is a projector (so P*P = P2 = P). The result follows. QED.

Theorem 2.1. Let H be a Hilbert space. Let S be the set of subspaces of H, and let P be the set of projectors on H. Let \varphi : P \to S be given by \varphi(P) = Im(P). We have that \varphi is a bijection.

Proof: We recall that for any P \in L(H), Im(P) is a subspace of H. So \varphi is well-defined. We first show that \varphi is injective. Fix projectors P, Q and suppose P and Q both project onto the subspace V \leq H. So for any u, v \in H, we have the following:

< Pu | Pv > = < P*Pu | v > = < Pu | v >, and
< Pu | Qv > = < Q*Pu | v > = < QPu | v > = < Pu | v >.


The last equality follows from Lemma 2.6, which provides that Q fixes every element in V. So in particular, Q fixes Pu. Thus:

< Pu | Pv - Qv > = < Pu |Pv > - < Pu | Qv > = < Pu | v > - < Pu | v > = 0.


By a symmetric argument (interchanging the roles of P and Q), we also have < Qu | Pv - Qv > = 0. Taking u = v, it follows that: ||Pv - Qv||^{2} = < Pv - Qv | Pv - Qv > = < Pv | Pv - Qv > - < Qv | Pv - Qv > = 0, which implies that Pv - Qv = 0. As v was arbitrary, it follows that P = Q. So we have that \varphi is indeed injective.

We now show that \varphi is surjective. Let W \leq H be an arbitrary subspace. Let { b_{1}, ..., b_{k} } be an orthonormal basis for W, and let \beta = { b_{1}, ..., b_{n} } be an extension of { b_{1}, ..., b_{k} } to an orthonormal basis for H. We construct a projector P \in L(H) which projects onto W. As H is finite-dimensional, we can represent P as a matrix with respect to \beta. We set:
  • P_{ii} = 1 for i \in {1, ..., k}.
  • P_{ii} = 0 for i \in {k+1, ..., n}.
  • P_{ij} = 0 for i \neq j.


Observe that P = P* and P^{2} = P. Furthermore, note that P fixes the basis vectors of W and annihilates the remaining basis vectors of H. So Pv = v for all v \in W. It follows that P projects orthogonally onto W. So \varphi is surjective. QED.

Remark: Theorem 2.1 has a nice corollary. Namely, if P projects onto the subspace V, then Tr(P) = dim(V). To see this, we take the projector P constructed in the proof of Theorem 2.1. The only non-zero entries are the 1's along the diagonal which correspond to the basis vectors of V. We also note that the trace and determinant of a matrix are preserved under conjugation. So Tr(P) is independent of our choice of basis.
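
This corollary can be illustrated numerically. The sketch below (an illustrative numpy aside) builds the projector onto the span W of two random orthonormal vectors in C^{4} as QQ*, where the columns of Q are an orthonormal basis of W, and checks that its trace is dim(W) = 2:

import numpy as np

rng = np.random.default_rng(3)

# Orthonormalize two random vectors in C^4 via QR; their span is a 2-dimensional subspace W.
M = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
Q, _ = np.linalg.qr(M)            # the columns of Q are an orthonormal basis of W

P = Q @ Q.conj().T                # orthogonal projector onto W

print(np.allclose(P, P.conj().T))   # Hermitean
print(np.allclose(P @ P, P))        # idempotent
print(np.isclose(np.trace(P), 2))   # Tr(P) = dim(W) = 2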

We conclude this section by proving that for any unit vector x, xx* is a projector.

Lemma 2.7. Let x be a unit vector. Then xx* is a projector.

Proof. Observe that (xx*)* = (x*)*x* = xx*. So xx* is Hermitean. We now show that (xx*)^{2} = xx*. We have that:

(xx*)^{2} = xx*(xx*)
= x(x*x)x*
= x(1)x*
= xx*.


Here, the second line follows from the fact that matrix multiplication is associative. Now as x is a unit vector, < x | x > = x*x = 1, which yields the remaining lines. The result follows. QED.
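
Lemma 2.7 is easy to check numerically as well. A minimal numpy sketch, with an arbitrary randomly chosen unit vector (purely for illustration):

import numpy as np

rng = np.random.default_rng(4)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x = x / np.linalg.norm(x)           # normalize, so x is a unit vector
P = np.outer(x, x.conj())           # P = xx*

print(np.allclose(P, P.conj().T))   # Hermitean
print(np.allclose(P @ P, P))        # idempotent: (xx*)^2 = xx*
print(np.allclose(P @ x, x))        # P fixes x, so P projects onto span{x}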



Dirac Notation and Tensor Products


We begin this section by briefly introducing Dirac notation, which is commonly used in quantum computation and by the physics community. The real advantage of Dirac notation is providing a visually compact description of quantum systems involving multiple qubits. Suppose we have a Hilbert space H. Let u, v \in H, and recall that < u | v > = u*v. Here, we refer to < u| = u* as a bra vector, and |v > as a ket vector. The inner product < u | v > is often referred to as a bracket ("bra-ket").

Remark: As a word of caution, vectors in Dirac notation are usually represented using labels. Oftentimes, these labels are numbers. We emphasize that in such cases, the numbers have little mathematical significance and are just names. We illustrate with the following example.

Example: Let H = C^{2}. Let |0> := e_{1} = (1, 0) be the first standard basis vector, and |1> := e_{2} = (0, 1) be the second standard basis vector. In the vectors |0>, |1>, the 0 and 1 are simply labels and do not have any mathematical significance. We also remark that the standard basis { |0>, |1> } is referred to as the computational basis in quantum computation.
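
As a brief illustration in code (a numpy sketch, not part of the tutorial proper), the computational basis vectors and their brackets look as follows:

import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

print(np.vdot(ket0, ket0))   # < 0 | 0 > = 1
print(np.vdot(ket0, ket1))   # < 0 | 1 > = 0

psi = (ket0 + ket1) / np.sqrt(2)             # the superposition (|0> + |1>)/sqrt(2)
print(np.isclose(np.linalg.norm(psi), 1))    # still a unit vector, i.e., a valid quantum state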

We next introduce tensor products. The motivation to study tensor products is as follows. Recall that quantum systems are represented using Hilbert spaces. Suppose we have two quantum systems S and T, modeled by H_{S} and H_{T}. In order to consider these two systems together as a single system, we need to construct a new Hilbert space from H_{S} and H_{T}. The particular construction is referred to as a tensor product, which is also known as the outer product or Kronecker product.

Definition (Tensor Product). Let A be an m x n matrix, and let B be an r x s matrix. The \textit{tensor product} A \otimes B is an mr x ns matrix given in block form by:
A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{pmatrix}.

Example. Consider the Pauli spin matrices X and Y. We have that:
X \otimes Y = \begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & -i & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix}.

Similarly, we have that:
Y \otimes X = \begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ i & 0 & 0 & 0 \end{pmatrix}.

In particular, observe that X \otimes Y \neq Y \otimes X. So the tensor product is not commutative.
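
This example can be reproduced with numpy, whose np.kron function implements the block-form definition above (the sketch is purely illustrative):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

XY = np.kron(X, Y)   # X tensor Y
YX = np.kron(Y, X)   # Y tensor X

print(XY)
print(YX)
print(np.array_equal(XY, YX))   # False: the tensor product is not commutative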

Remark: Let |a>, |b> be vectors. We note that the tensor product |a> \otimes |b> is frequently written as |a>|b> or |a, b>. Similarly, we may write the tensor product of two bra vectors as < a| < b| or < a, b| when the meaning is clear. Note that this shorthand notation only applies when using Dirac notation.

We next examine some of the algebraic properties of the tensor product. First, observe that if a, b \in C, then a \otimes b = ab. Here, we treat a and b as 1 x 1 matrices. More generally, suppose that x is an m x 1 (column) vector and y is a 1 x n (row) vector. Then x \otimes y = xy. Both of these observations follow immediately by applying the definition of the tensor product. The tensor product also satisfies a number of algebraic properties listed below, which we will not prove (a numerical check follows the list). Let A, B, C, D be matrices, and let a, b \in C. Provided the operations involved are well-defined, we have:
  • The tensor product is associative. That is, (A \otimes B) \otimes C = A \otimes (B \otimes C).
  • (A \otimes B)(C \otimes D) = (AC \otimes BD).
  • (A \otimes B)* = A* \otimes B*.
  • Tr(A \otimes B) = Tr(A) \cdot Tr(B).
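
These properties can be verified numerically on random matrices. The following numpy sketch (an illustrative aside; the particular dimensions are arbitrary) checks each identity:

import numpy as np

rng = np.random.default_rng(5)

def rnd(m, n):
    # Random m x n complex matrix.
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

A, B = rnd(2, 3), rnd(2, 2)
C, D = rnd(3, 4), rnd(2, 5)

# Associativity: (A tensor B) tensor C = A tensor (B tensor C).
print(np.allclose(np.kron(np.kron(A, B), C), np.kron(A, np.kron(B, C))))

# Mixed products: (A tensor B)(C tensor D) = AC tensor BD.
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))

# Adjoints: (A tensor B)* = A* tensor B*.
print(np.allclose(np.kron(A, B).conj().T, np.kron(A.conj().T, B.conj().T)))

# Traces (square matrices): Tr(A tensor B) = Tr(A) Tr(B).
S, T = rnd(3, 3), rnd(4, 4)
print(np.allclose(np.trace(np.kron(S, T)), np.trace(S) * np.trace(T)))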


We will, however, prove that the tensor product is bilinear; that is, the tensor product is linear in both arguments.

Lemma 2.8. Let A, B, C be matrices, and let k \in C. Then A \otimes (B + kC) = A \otimes B + k(A \otimes C) and (A + kB) \otimes C = A \otimes C + k(B \otimes C).

Proof. Consider the matrix A \otimes (B + kC) in block form. Observe that the ij block of A \otimes (B + kC) is a_{ij}(B + kC) = a_{ij}B + k(a_{ij}C). Therefore, we may write A \otimes (B + kC) = X + Y, where the ij block of X is a_{ij}B, and the ij block of Y is k(a_{ij}C). So X = A \otimes B and Y = k(A \otimes C), as desired.

Similarly, consider (A + kB) \otimes C in block form. We note that the ij block of (A + kB) \otimes C is (a_{ij} + kb_{ij})C = a_{ij}C + k(b_{ij}C). Therefore, we may write (A + kB) \otimes C = X + Y, where the ij block of X is a_{ij}C, and the ij block of Y is k(b_{ij}C). So X = A \otimes C and Y = k(B \otimes C), as desired. QED.

Our next goal is to make some sense of what it means to take the tensor product of two vector spaces. We begin with a motivating example. Suppose that x = (x_{1}, ..., x_{m}) \in C^{m} and y = (y_{1}, ..., y_{n}) \in C^{n} are column vectors. We have that:
x \otimes y = (x_{1}y_{1}, \; x_{1}y_{2}, \; \ldots, \; x_{1}y_{n}, \; x_{2}y_{1}, \; \ldots, \; x_{2}y_{n}, \; \ldots, \; x_{m}y_{1}, \; \ldots, \; x_{m}y_{n})^{T}.


So x \otimes y is a column vector in C^{mn}. This suggests that C^{m} \otimes C^{n} \cong C^{mn}. This is in fact the case. In order to prove this, we show that if \beta = { \beta_{1}, ..., \beta_{m} } is an orthonormal basis of C^{m} and \gamma = { \gamma_{1}, ..., \gamma_{n} } is an orthonormal basis of C^{n}, then:

\beta \otimes \gamma = { \beta_{i} \otimes \gamma_{j} : 1 \leq i \leq m, 1 \leq j \leq n }


is an orthonormal basis of C^{mn}. We begin with the following lemma.


Lemma 2.9. Let H, J be Hilbert spaces. Let u, w \in H, and let v, x \in J. Then: < u \otimes v | w \otimes x > = < u|w > < v|x >.

Proof. By our properties of tensor products, we have that:

< u \otimes v | w \otimes x > = (u \otimes v)*(w \otimes x)
= (u* \otimes v*)(w \otimes x)
= (u*w) \otimes (v*x).


We note that u*w = < u|w > \in C. Similarly, v*x = < v | x > \in C. The result follows. QED.

Corollary 2.0.1. Let \beta = { \beta_{1}, ..., \beta_{m} } be an orthonormal basis of C^{m}, and let \gamma = { \gamma_{1}, ..., \gamma_{n} } be an orthonormal basis of C^{n}. Then:

\beta \otimes \gamma = { \beta_{i} \otimes \gamma_{j} : 1 \leq i \leq m, 1 \leq j \leq n }


is an orthonormal basis of C^{mn}.

Proof: Let a, b \in {1, ..., m} and c, d \in {1, ..., n}. By Lemma 2.9, we have that:

< \beta_{a} \otimes \gamma_{c} | \beta_{b} \otimes \gamma_{d} > = < \beta_{a} | \beta_{b} > < \gamma_{c} | \gamma_{d} >.


As \beta, \gamma are orthonormal bases, < \beta_{a} | \beta_{b} > < \gamma_{c} | \gamma_{d} > = 1 if and only if a = b and c = d, and is 0 otherwise. So \beta \otimes \gamma is an orthonormal set, and |\beta \otimes \gamma| = mn = \text{dim}(C^{mn}). So \beta \otimes \gamma forms an orthonormal basis of C^{mn}. QED.

So for vector spaces V and W, we define V \otimes W to be the span of { v \otimes w : v \in V, w \in W }. Notice that this definition is independent of the choice of basis. If we want to compute explicitly the tensor product of two vectors or matrices, then it is necessary for us to identify a choice of basis. We also remark, in light of Corollary 2.0.1, that if H = C^{m} and J = C^{n} are Hilbert spaces, then H \otimes J is isomorphic to C^{mn}.
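
Corollary 2.0.1 can also be illustrated numerically: take orthonormal bases of C^{2} and C^{3} (obtained here, purely for illustration, as columns of unitary matrices from a QR factorization), form all of the tensor products, and check that the resulting six vectors are orthonormal in C^{6}.

import numpy as np

rng = np.random.default_rng(6)

def random_orthonormal_basis(n):
    # The columns of Q form an orthonormal basis of C^n (illustrative construction).
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(M)
    return Q

B = random_orthonormal_basis(2)   # the basis beta of C^2
G = random_orthonormal_basis(3)   # the basis gamma of C^3

# The columns of np.kron(B, G) are exactly the vectors beta_i tensor gamma_j.
T = np.kron(B, G)
print(np.allclose(T.conj().T @ T, np.eye(6)))   # the six tensor products are orthonormal in C^6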


Conclusion

In this tutorial, we introduced the remaining tools from Linear Algebra necessary to discuss quantum computation. In the next tutorial, we will finish introducing the preliminaries for quantum computation, including the ring of quaternions and their representations, rotational symmetries of the three-dimensional unit sphere S^{3}, and the relationships between these objects.
