Why you may not want to read this page
Incomplete, brief, and evolving "lookup table" on quantum mechanics for personal use, closely following Nielsen and Chuang (2010) and others. There may be typos (even conceptual errors!). If you find one, please tell me about it (argphy@gmail.com). Also, feel free to use the material for your own purposes.
Let $\mathcal{V}$ be a set of elements (vectors) with two operations, addition and multiplication by a scalar from a field $\mathbb{F}$ (typically $\mathbb{C}$ in QM). $\mathcal{V}$ is called a vector space if for vectors $\ket{v},\ket{w},\ket{x} \in \mathcal{V}$ and scalars $c,d \in \mathbb{F}$, the following ten conditions are satisfied:
Examples of vector spaces include $\mathbb{R}^n$, $\mathbb{C}^n$, and $\mathcal{M}_{m,n}(\mathbb{F})$ (the space of $m\times n$ matrices with entries from the field $\mathbb{F}$). One important point: when it is said that $\mathcal{V}$ is a vector space over a field $\mathbb{F}$, it means that the scalars used in the scalar multiplication come from $\mathbb{F}$ (also known as the ground field), not that the entries of the vectors are from $\mathbb{F}$. For example, one can ask whether the Hermitian matrices form a vector space over $\mathbb{R}$ or over $\mathbb{C}$, even though their entries are from $\mathbb{C}$. This is important since, for example, the space of $n\times n$ Hermitian matrices $\mathcal{M}^{\rm H}_n$ is not a vector space over $\mathbb{C}$, but it is a vector space over $\mathbb{R}$. Example: $A=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \in \mathcal{M}^{\rm H}_2$, but since $(iA)^\dagger\neq iA$, the matrix $iA$ is not Hermitian, so $\mathcal{M}^{\rm H}_2$ isn't closed under scalar multiplication over $\mathbb{C}$.
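A quick NumPy sanity check of this closure argument (a minimal sketch of my own; the helper name `is_hermitian` is mine):

```python
import numpy as np

def is_hermitian(M):
    """Check M == M^dagger (conjugate transpose)."""
    return np.allclose(M, M.conj().T)

A = np.array([[0, 1], [1, 0]], dtype=complex)   # the Hermitian matrix A above

print(is_hermitian(A))        # True
print(is_hermitian(2.5 * A))  # True: closed under multiplication by *real* scalars
print(is_hermitian(1j * A))   # False: iA is not Hermitian, so not closed over C
```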
If $\mathcal{V}$ is a vector space and $\mathcal{S} \subseteq \mathcal{V}$ is non-empty, then $\mathcal{S}$ is a subspace iff it is closed under addition and scalar multiplication. (You need not check all ten properties; see p. 9, Johnston (2021) for a proof.)
Example: the set of non-invertible $2 \times 2$ matrices is *not* a subspace of $\mathcal{M}_2$. $\begin{smallmatrix}1 & 0 \\ 0 & 0\end{smallmatrix}$ and $\begin{smallmatrix} 0 & 0 \\ 0 & 1\end{smallmatrix}$ are non-invertible but their sum $\begin{smallmatrix}1 & 0 \\ 0 & 1\end{smallmatrix}$ is invertible.
Any vector of the form $\sum^k_{i=1} c_i \ket{v_i}$ is called a linear combination. Interesting: The identity matrix *cannot* be written as a linear combination of the Pauli matrices $X$, $Y$, $Z$ (they are traceless and the trace is linear, while $\operatorname{tr} I = 2$).
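To make the claim concrete, here is a small NumPy sketch (my own check, combining the trace argument with a least-squares fit):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

# Trace argument: X, Y, Z are traceless, so any linear combination of them
# is traceless, while tr(I) = 2.
print([np.trace(M) for M in (X, Y, Z)], np.trace(I))   # traces 0, 0, 0 and 2

# Least-squares check: the best approximation to I within span{X, Y, Z}
# still is not I.
M = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
c, *_ = np.linalg.lstsq(M, I.ravel(), rcond=None)
print(np.allclose(c[0] * X + c[1] * Y + c[2] * Z, I))  # False
```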
The set of all *finite* linear combinations of vectors from $\mathcal{B} \subseteq \mathcal{V}$. $\rm{span}(\mathcal{B})$ is a subspace of $\mathcal{V}$. If $\rm{span}(\mathcal{B})=\mathcal{V}$, then $\mathcal{V}$ is said to be spanned by $\mathcal{B}$. For example, a single vector spans a line, and two non-parallel vectors span $\mathbb{R}^2$.
A set of vectors $\mathcal{B}=\{\ket{v_1},\ldots,\ket{v_k}\} \subseteq \mathcal{V}$ is linearly dependent if there exists a set of scalars $c_1,c_2,\cdots,c_k$, not all zero, such that $\sum_{i=1}^k c_i\ket{v_i}=0$ (the zero vector). $\mathcal{B}$ is linearly independent if it is not linearly dependent. (It's a funny definition 😄, but see the example for what it means.)
Example: Two non-parallel vectors in a plane are linearly independent; three or more vectors in a plane are linearly dependent. Notice that an infinite set of vectors can be linearly independent: you only need to show that every *finite* linear combination of them that equals zero has all of its scalars equal to zero. Think about the set $\{1,x,x^2,\cdots\}$. Though the set contains infinitely many elements, it is linearly independent, because if $\sum_{i=0}^p c_i x^i=0$ for all $x$, then all $c_i$ are equal to 0 (set $x=0 \implies c_0=0$; take derivatives and show $c_1=0$, and so on; see p. 17 of Johnston (2021)). So we cannot find any finite linear combination equal to 0 with at least one non-zero coefficient, implying that the set is not linearly dependent. So it's linearly independent!
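A numerical way to see the finite case (my own sketch; checking $\{1,x,x^2\}$ at three distinct points suffices because the resulting Vandermonde system is invertible):

```python
import numpy as np

# If c0 + c1*x + c2*x**2 = 0 for every x, it must hold at x = 0, 1, 2,
# giving V @ c = 0 with the Vandermonde matrix V below. Since det(V) != 0,
# the only solution is c = 0, i.e. {1, x, x^2} is linearly independent.
xs = np.array([0.0, 1.0, 2.0])
V = np.vander(xs, 3, increasing=True)    # columns: 1, x, x^2 evaluated at xs

print(np.linalg.det(V))                  # 2.0 (non-zero)
print(np.linalg.solve(V, np.zeros(3)))   # [0. 0. 0.] -- only the trivial solution
```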
A map between two vector spaces $\mathcal{V}$ and $\mathcal{W}$, $A: \mathcal{V} \rightarrow \mathcal{W}$, that is linear: $A (c_1 \ket{v_1} + c_2 \ket{v_2}) = c_1 A \ket{v_1} + c_2 A \ket{v_2}$. It has a matrix representation with respect to bases $\{\ket{v_j}\}$ of $\mathcal{V}$ and $\{\ket{w_i}\}$ of $\mathcal{W}$, given by $A\ket{v_j}=\sum_i A_{ij}\ket{w_i}$. The representation depends on the chosen bases.
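To see the basis dependence concretely, here is a NumPy sketch (my own example) that computes the matrix of $Z$ in the computational basis and in the $\{\ket{+},\ket{-}\}$ basis, where the same operator looks like $X$:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

e0 = np.array([1, 0], dtype=complex)    # |0>
e1 = np.array([0, 1], dtype=complex)    # |1>
plus = (e0 + e1) / np.sqrt(2)           # |+>
minus = (e0 - e1) / np.sqrt(2)          # |->

def matrix_rep(A, basis):
    """Matrix elements A_ij = <b_i| A |b_j> in an orthonormal basis."""
    return np.array([[bi.conj() @ A @ bj for bj in basis] for bi in basis])

print(matrix_rep(Z, [e0, e1]))       # [[1, 0], [0, -1]]  -- Z itself
print(matrix_rep(Z, [plus, minus]))  # [[0, 1], [1,  0]]  -- looks like X
```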
A function $(\cdot\,,\cdot):\mathcal{V} \times \mathcal{V} \rightarrow \mathbb{C}$. Some properties:
$$ \begin{aligned} & (\ket{v},\lambda \ket{w}) =\lambda(\ket{v}, \ket{w}) \;(\text{linear in 2nd argument})\\ & (\lambda\ket{v}, \ket{w}) =\lambda^*(\ket{v}, \ket{w}) \;(\text{conjugate/anti-linear in 1st argument})\\ & (\ket{v},\ket{w}) =(\ket{w}, \ket{v})^*\\ & \braket{v}{w}=\begin{bmatrix}v_1^* & v_2^*\end{bmatrix}\begin{bmatrix}w_1\\w_2\end{bmatrix}\; \text{in } \mathbb{C}^2 \end{aligned} $$

First, a definition. The adjoint/Hermitian conjugate of an operator $A$ is the unique operator $A^\dagger$ that satisfies the relation $(\ket{v},A\ket{w})=(A^\dagger \ket{v},\ket{w})$ for any $\ket{v}$ and $\ket{w}$.
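A quick NumPy check of the defining relation of the adjoint (my own sketch with a random operator and random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3                                   # the dimension is arbitrary

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
w = rng.normal(size=dim) + 1j * rng.normal(size=dim)

inner = lambda a, b: a.conj() @ b         # (|a>, |b>) = <a|b>
A_dag = A.conj().T                        # adjoint = conjugate transpose

# (|v>, A|w>) == (A^dagger |v>, |w>)
print(np.isclose(inner(v, A @ w), inner(A_dag @ v, w)))   # True
```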
Some properties of the adjoint operator:
$$ \begin{aligned} & A^\dagger = (A^*)^{\rm T}\;\text{for matrix representation}\\ & (AB)^\dagger = B^\dagger A^\dagger\\ & (A^\dagger)^\dagger = A\\ & \left(\sum_i a_i A_i\right)^\dagger = \sum_i a_i^* A_i^\dagger\; \text{(anti-linearity)} \end{aligned} $$

Normal operator: $A$ is normal if $A^\dagger A = A A^\dagger$. A normal matrix is Hermitian if and only if it has real eigenvalues. Eigenvectors of a Hermitian operator with distinct eigenvalues are orthogonal.
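And a NumPy check of these properties (again my own sketch with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 3
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
dag = lambda M: M.conj().T

print(np.allclose(dag(A @ B), dag(B) @ dag(A)))   # (AB)^dag = B^dag A^dag
print(np.allclose(dag(dag(A)), A))                # (A^dag)^dag = A

# A Hermitian matrix (hence normal) has real eigenvalues.
H = A + dag(A)
print(np.allclose(dag(H) @ H, H @ dag(H)))        # True: H is normal
print(np.allclose(np.linalg.eigvals(H).imag, 0))  # True: eigenvalues are real
```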
A self-adjoint/Hermitian operator satisfies $A^\dagger=A$.
A unitary operator $U$ is a normal operator with $U^\dagger U = U U^\dagger=I$. Since $U$ is normal, it's diagonalizable, too. $U$ preserves inner products: $(U\ket{v}, U\ket{w}) = (\ket{v},\ket{w})$. Any unitary can be written as $U = \sum_i \ket{w_i}\bra{v_i}$, where $\{\ket{v_i}\}$ and $\{\ket{w_i}\}$ are orthonormal bases (take $\ket{w_i}=U\ket{v_i}$).
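A small NumPy illustration (my own sketch; a random unitary is built from a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 3

# Q from the QR decomposition of a random complex matrix is unitary.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)

v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
w = rng.normal(size=dim) + 1j * rng.normal(size=dim)

print(np.allclose(U.conj().T @ U, np.eye(dim)))            # U^dag U = I
print(np.isclose(v.conj() @ w, (U @ v).conj() @ (U @ w)))  # <v|w> = <Uv|Uw>
```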
An operator $A$ is positive (positive semidefinite) if $(\ket{v},A\ket{v})\geq 0$ for every $\ket{v}$. If, in addition, $(\ket{v},A\ket{v})> 0$ for every $\ket{v}\neq 0$, then $A$ is called positive definite. $A^\dagger A$ is positive for any operator $A$.
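A quick check that $A^\dagger A$ is positive for a random $A$ (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

P = A.conj().T @ A                     # A^dagger A is Hermitian ...
print(np.linalg.eigvalsh(P))           # ... with eigenvalues >= 0

# Check the definition directly: <v| P |v> >= 0 for a random |v>.
v = rng.normal(size=3) + 1j * rng.normal(size=3)
print((v.conj() @ P @ v).real >= 0)    # True
```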
A bra vector is a linear functional $\mathcal{V} \rightarrow \mathbb{C}$; i.e. it takes a ket vector to a complex number: $(\bra{v})\ket{w} = \braket{v}{w}$. Cohen-Tannoudji (2020) shows that the space of all such linear functionals forms another vector space $\mathcal{V}^*$, known as the dual space of $\mathcal{V}$ (pp. 103–108). A bra vector is an element of $\mathcal{V}^*$. A bra vector can also be represented as an adjoint vector: $\ket{v}^\dagger \equiv \bra{v}$.
Interestingly, for every ket vector in $\mathcal{V}$ there exists a bra vector in $\mathcal{V}^*$, but a given bra vector may have no corresponding ket when the space is an infinite-dimensional Hilbert space (Cohen-Tannoudji (2020) gives a counter-example). For finite-dimensional vector spaces the two spaces have the same dimension, so every bra does correspond to a ket.
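In the finite-dimensional case this is easy to make concrete: the bra is just the conjugate transpose of the column vector (a minimal NumPy sketch):

```python
import numpy as np

v = np.array([[1 + 1j], [2 - 1j]])   # ket |v> as a column vector
w = np.array([[3j], [1]])            # ket |w>

bra_v = v.conj().T                   # <v| = |v>^dagger, a row vector
print(bra_v @ w)                     # the number <v|w> (as a 1x1 matrix)
```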
A basis set whose elements are orthonormal, $\braket{i}{j}=\delta_{ij}$ (like unit vectors). They satisfy the completeness relation $\sum_i \ket{i}\bra{i} = I$.
The Gram–Schmidt method turns a linearly independent set of vectors into an orthonormal basis of its span. This is done by normalizing the first vector from the linearly independent set to make it the first basis element, and then progressively making the other vectors orthonormal by subtracting their projections onto the basis vectors built so far (and normalizing). Wikipedia has a neat illustration, check it. Savov (2020) also explains it well. Todo: Add a better synopsis later.
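In the meantime, a minimal classical Gram–Schmidt sketch in NumPy (my own, assuming the input vectors are linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for b in basis:
            w = w - (b.conj() @ w) * b       # subtract the projection onto b
        basis.append(w / np.linalg.norm(w))  # normalize what is left
    return basis

vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
basis = gram_schmidt(vecs)

# Orthonormality <i|j> = delta_ij and completeness sum_i |i><i| = I.
G = np.array([[bi.conj() @ bj for bj in basis] for bi in basis])
print(np.allclose(G, np.eye(3)))                                          # True
print(np.allclose(sum(np.outer(b, b.conj()) for b in basis), np.eye(3)))  # True
```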
The outer-product representation reduces to the diagonal representation (eigendecomposition/orthonormal decomposition) when $\ket{v_i}, \ket{w_j}$ are orthonormal eigenvectors of $A$.
Interestingly, if the eigenvectors of $A: \mathcal{V} \rightarrow \mathcal{V}$ have non-degenerate eigenvalues, then they form a basis (eigenbasis); for a normal $A$ this basis can be chosen orthonormal, giving $A = \sum_i \lambda_i \ket{i}\bra{i}$. Example: our favorite Hamiltonian operator and its energy eigenvalues: $H=\sum_E E\ket{E}\bra{E}$
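A NumPy version of the spectral decomposition $H=\sum_E E\ket{E}\bra{E}$ (my own sketch with an arbitrary Hermitian matrix):

```python
import numpy as np

H = np.array([[1.0, 1.0 - 1j],
              [1.0 + 1j, 2.0]])        # an arbitrary 2x2 Hermitian "Hamiltonian"

energies, states = np.linalg.eigh(H)   # columns of `states` are the kets |E>

H_rebuilt = sum(E * np.outer(ket, ket.conj())
                for E, ket in zip(energies, states.T))
print(np.allclose(H, H_rebuilt))       # True: H = sum_E E |E><E|
```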
If $\mathcal{V}$ and $\mathcal{W}$ are $m$- and $n$-dimensional vector spaces, then $\mathcal{V} \otimes \mathcal{W}$ (read as 'V tensor W') is an $mn$-dimensional vector space.
The tensor product of linear operators is again a linear operator, acting factor-wise: $(A\otimes B)(\ket{v}\otimes\ket{w}) = (A\ket{v})\otimes(B\ket{w})$.
Notation: $\ket{\psi}^{\otimes 2} \equiv \ket{\psi}\otimes \ket{\psi}$
Tensor products of (Hermitian, unitary, positive, projector) operators retain those properties.
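A NumPy sketch of these facts using `np.kron` for the tensor (Kronecker) product (my own example with one Pauli and one Hadamard):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# dim(V (x) W) = dim(V) * dim(W): two qubits live in a 4-dimensional space.
print(np.kron(ket0, ket1).shape)     # (4,)

# (A (x) B)(|v> (x) |w>) = (A|v>) (x) (B|w>)
lhs = np.kron(X, H) @ np.kron(ket0, ket1)
rhs = np.kron(X @ ket0, H @ ket1)
print(np.allclose(lhs, rhs))         # True

# The tensor product of unitaries is unitary.
U = np.kron(X, H)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True
```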
Gate | Matrix | Eigenvalues, eigenvectors | Properties |
---|---|---|---|
Identity ($\sigma_0, I$) | $$\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}$$ | $\{1, 1\}, \{\ket{0},\ket{1}\}$ | |
Pauli X / Bit flip ($\sigma_1, \sigma_x, X$) | $$\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}$$ | $\{1,-1\}, \{\ket{+},\ket{-}\}$ | $$\begin{aligned} & [\sigma_i,\sigma_j]=2i\sum_{k=1}^3\epsilon_{ijk}\sigma_k, \\ & \{\sigma_i,\sigma_j\}=0\;(i\neq j), \; \sigma_i^2=I \end{aligned}$$ |
Pauli Y ($\sigma_2, \sigma_y, Y$) | $$\begin{bmatrix}0 & -i \\ i & 0 \end{bmatrix}$$ | $\{1,-1\}, \{\ket{y_+},\ket{y_-}\}$ | |
Pauli Z / Phase flip ($\sigma_3, \sigma_z, Z$) | $$\begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix}$$ | $\{1,-1\}, \{\ket{0},\ket{1}\}$ | |
Phase ($P_\theta$) | $$\begin{bmatrix}1 & 0 \\ 0 & e^{i\theta} \end{bmatrix}$$ | | |
$S$ ($=P_{\pi/2}$) | $$\begin{bmatrix}1 & 0 \\ 0 & i \end{bmatrix}$$ | | |
$T$ ($=P_{\pi/4}$) | $$\begin{bmatrix}1 & 0 \\ 0 & \frac{1+i}{\sqrt{2}} \end{bmatrix}$$ | | |
Hadamard ($H$) | $$\frac{1}{\sqrt{2}} \begin{bmatrix}1 & 1 \\ 1 & -1 \end{bmatrix}$$ | | |
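A quick NumPy sanity check of a few entries in the table, plus a couple of standard identities (the helper name `phase` is mine):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def phase(theta):
    """Phase gate P_theta = diag(1, e^{i theta})."""
    return np.array([[1, 0], [0, np.exp(1j * theta)]], dtype=complex)

S, T = phase(np.pi / 2), phase(np.pi / 4)

# All of these gates are unitary.
print(all(np.allclose(G.conj().T @ G, I) for G in (X, Y, Z, H, S, T)))   # True

# T^2 = S, S^2 = Z, H^2 = I, and e.g. the Pauli relation X Y = i Z.
print(np.allclose(T @ T, S), np.allclose(S @ S, Z), np.allclose(H @ H, I))
print(np.allclose(X @ Y, 1j * Z))

# |+> and |-> are eigenvectors of X with eigenvalues +1 and -1.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
print(np.allclose(X @ plus, plus), np.allclose(X @ minus, -minus))
```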