### Lecture 3: Quantum simulation algorithms

Dominic Berry, Macquarie University

#### Simulation of Hamiltonians (Lloyd, 1996)

We want to simulate the evolution
$$|\psi(t)\rangle = e^{-iHt}\,|\psi(0)\rangle$$

The Hamiltonian is a sum of terms:
$$H = \sum_{\ell=1}^{L} H_\ell$$

We can perform $e^{-iH_\ell t}$.

For short times we can use
$$e^{-iH_1 t}\, e^{-iH_2 t} \cdots e^{-iH_{L-1} t}\, e^{-iH_L t} \approx e^{-iHt}$$

For long times
$$\left(e^{-iH_1 t/r}\, e^{-iH_2 t/r} \cdots e^{-iH_L t/r}\right)^r \approx e^{-iHt}$$
This approximation holds because
$$
e^{-iH_1 t}\, e^{-iH_2 t} \cdots e^{-iH_L t}
= \left(I - iH_1 t + O(t^2)\right)\left(I - iH_2 t + O(t^2)\right)\cdots\left(I - iH_L t + O(t^2)\right)
$$
$$
= I - i(H_1 + H_2 + \cdots + H_L)\,t + O(t^2)
= I - iHt + O(t^2)
= e^{-iHt} + O(t^2)
$$
If we divide the long time $t$ into $r$ intervals, then
$$
e^{-iHt} = \left(e^{-iHt/r}\right)^r
= \left(e^{-iH_1 t/r}\, e^{-iH_2 t/r} \cdots e^{-iH_L t/r}\right)^r + O(t^2/r)
$$

Typically, we want to simulate a system with some maximum allowable error $\epsilon$. Then we need $r \propto t^2/\epsilon$.
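The first-order scaling can be checked numerically. A minimal sketch (numpy only; the random Hermitian terms and helper names are illustrative, not from the lecture): doubling $r$ should roughly halve the first-order Trotter error.

```python
import numpy as np

rng = np.random.default_rng(1)
def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def U(Hm, s):
    """Exact e^{-i Hm s} via eigendecomposition of the Hermitian Hm."""
    w, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * w * s)) @ V.conj().T

H1, H2 = rand_herm(4), rand_herm(4)
H = H1 + H2
t = 1.0

def trotter_error(r):
    """Spectral-norm error of (e^{-iH1 t/r} e^{-iH2 t/r})^r vs e^{-iHt}."""
    step = U(H1, t / r) @ U(H2, t / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - U(H, t), 2)

e1, e2 = trotter_error(8), trotter_error(16)
print(e1 / e2)   # first-order error ~ t^2 / r, so the ratio is near 2
```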
#### Higher-order simulation (Berry, Ahokas, Cleve & Sanders, 2007)

A higher-order decomposition is the symmetric product
$$
e^{-iH_1 t/2} \cdots e^{-iH_{L-1} t/2}\, e^{-iH_L t/2}\; e^{-iH_L t/2}\, e^{-iH_{L-1} t/2} \cdots e^{-iH_1 t/2}
= e^{-iHt} + O(t^3)
$$

If we divide the long time $t$ into $r$ intervals, then
$$
e^{-iHt} = \left(e^{-iHt/r}\right)^r
= \left(e^{-iH_1 t/2r} \cdots e^{-iH_L t/2r}\, e^{-iH_L t/2r} \cdots e^{-iH_1 t/2r}\right)^r + O\!\left(r\,(t/r)^3\right)
$$
$$
= \left(e^{-iH_1 t/2r} \cdots e^{-iH_L t/2r}\, e^{-iH_L t/2r} \cdots e^{-iH_1 t/2r}\right)^r + O\!\left(t^3/r^2\right)
$$

Then we need $r \propto t^{1.5}/\epsilon^{1/2}$.

A general order-$2k$ product formula gives error $O\!\left((t/r)^{2k+1}\right)$ for time $t/r$. For time $t$ the error is $O\!\left(t^{2k+1}/r^{2k}\right)$. To bound the error as $\epsilon$, the value of $r$ scales as
$$r \propto \frac{t^{1+1/2k}}{\epsilon^{1/2k}}$$
The complexity is $L \times r$.
For Suzuki product formulae, we have an additional factor in
$$r \propto \frac{5^k\, t^{1+1/2k}}{\epsilon^{1/2k}}$$
The complexity then needs to be multiplied by a further factor of $5^k$, since each interval uses $O(5^k)$ exponentials. The overall complexity scales as
$$\frac{5^{2k}\, t^{1+1/2k}}{\epsilon^{1/2k}}$$
We can also take an optimal value of $k \propto \sqrt{\log(t/\epsilon)}$, which gives scaling
$$t \exp\!\left[2\sqrt{\ln 5 \,\ln(t/\epsilon)}\right]$$
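The recursive Suzuki construction and its error scaling can be sketched numerically. This is an illustration with random Hermitian terms; `S2`, `suzuki`, and `err` are my own helper names, and the exact exponentials are computed by eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(2)
def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def U(Hm, s):
    """Exact e^{-i Hm s} via eigendecomposition of the Hermitian Hm."""
    w, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * w * s)) @ V.conj().T

terms = [rand_herm(4) for _ in range(3)]
H = sum(terms)

def S2(lam):
    """Symmetric (order-2) formula: forward half-steps, then backward."""
    fwd = np.eye(4, dtype=complex)
    for Hl in terms:
        fwd = fwd @ U(Hl, lam / 2)
    bwd = np.eye(4, dtype=complex)
    for Hl in reversed(terms):
        bwd = bwd @ U(Hl, lam / 2)
    return fwd @ bwd

def suzuki(k, lam):
    """Order-2k Suzuki formula via the standard recursion (5 copies per level)."""
    if k == 1:
        return S2(lam)
    p = 1.0 / (4.0 - 4.0 ** (1.0 / (2 * k - 1)))
    A = suzuki(k - 1, p * lam)
    B = suzuki(k - 1, (1 - 4 * p) * lam)
    return A @ A @ B @ A @ A

def err(k, r, t=1.0):
    step = suzuki(k, t / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - U(H, t), 2)

# Order-2 error falls like 1/r^2; order-4 like 1/r^4.
print(err(1, 4) / err(1, 8), err(2, 4) / err(2, 8))
```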
#### Solving linear systems (Harrow, Hassidim & Lloyd, 2009)

Consider a large system of linear equations:
$$A\vec{x} = \vec{b}$$

First assume that the matrix $A$ is Hermitian. It is possible to simulate Hamiltonian evolution under $A$ for time $t$: $e^{-iAt}$.

Encode the initial state in the form
$$|b\rangle = \sum_{\ell=1}^{N} b_\ell\, |\ell\rangle$$

The state can also be written in terms of the eigenvectors of $A$ as
$$|b\rangle = \sum_{j=1}^{N} \beta_j\, |\lambda_j\rangle$$

We can obtain the solution $|x\rangle$ if we can divide each $\beta_j$ by $\lambda_j$. Use the phase estimation technique to place the estimate of $\lambda_j$ in an ancillary register to obtain
$$\sum_{j=1}^{N} \beta_j\, |\lambda_j\rangle|\tilde\lambda_j\rangle$$
Append an ancilla and rotate it according to the value of $\tilde\lambda_j$ to obtain
$$\sum_{j=1}^{N} \beta_j\, |\lambda_j\rangle|\tilde\lambda_j\rangle\left(\frac{1}{\tilde\lambda_j}\,|0\rangle + \sqrt{1 - \frac{1}{\tilde\lambda_j^2}}\,|1\rangle\right)$$

Invert the phase estimation technique to remove the estimate of $\lambda_j$ from the ancillary register, giving
$$\sum_{j=1}^{N} \beta_j\, |\lambda_j\rangle\left(\frac{1}{\tilde\lambda_j}\,|0\rangle + \sqrt{1 - \frac{1}{\tilde\lambda_j^2}}\,|1\rangle\right)$$

Use amplitude amplification to amplify the $|0\rangle$ component on the ancilla, giving a state proportional to
$$|x\rangle \propto \sum_{j=1}^{N} \frac{\beta_j}{\lambda_j}\, |\lambda_j\rangle = \sum_{\ell=1}^{N} x_\ell\, |\ell\rangle$$
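The spectral picture behind these steps can be sketched classically. This is a numpy illustration of "divide each $\beta_j$ by $\lambda_j$", not the quantum algorithm itself; the random system is my own example.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                 # Hermitian (real symmetric) example
b = rng.normal(size=4)

lam, V = np.linalg.eigh(A)        # A = V diag(lam) V^T
beta = V.T @ b                    # coefficients of b in the eigenbasis
x = V @ (beta / lam)              # divide each beta_j by lambda_j

assert np.allclose(A @ x, b)      # this is exactly the solution of A x = b
```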
If $A$ is not Hermitian, construct a blockwise matrix
$$A' = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}$$
The inverse of $A'$ is then
$$A'^{-1} = \begin{pmatrix} 0 & (A^\dagger)^{-1} \\ A^{-1} & 0 \end{pmatrix}$$
This means that
$$A'^{-1}\begin{pmatrix} \vec b \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ A^{-1}\vec b \end{pmatrix}$$
In terms of the state,
$$|0\rangle|b\rangle \mapsto |1\rangle|x\rangle$$
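A quick numerical check of the block embedding, assuming an arbitrary random non-Hermitian $A$:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # non-Hermitian
b = rng.normal(size=3)

Ap = np.block([[np.zeros((3, 3)), A],
               [A.conj().T, np.zeros((3, 3))]])
assert np.allclose(Ap, Ap.conj().T)        # A' is Hermitian

# Solving A' y = (b, 0) leaves A^{-1} b in the second block: |0>|b> -> |1>|x>.
y = np.linalg.solve(Ap, np.concatenate([b, np.zeros(3)]))
assert np.allclose(y[:3], 0)
assert np.allclose(A @ y[3:], b)
```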
#### Solving linear systems: complexity analysis (Harrow, Hassidim & Lloyd, 2009)

We need to examine:
1. The complexity of simulating the Hamiltonian to estimate the phase.
2. The accuracy needed for the phase estimate.
3. The possibility of $1/\lambda_j$ being greater than 1.

The complexity of simulating the Hamiltonian for time $t$ is approximately $\propto t\,\|A\| = t\,|\lambda_{\max}|$.

To obtain accuracy $\delta$ in the estimate of $\lambda_j$, the Hamiltonian needs to be simulated for time $\propto 1/\delta$.

We actually need to multiply the state coefficients by $\lambda_{\min}/\lambda_j$, to give
$$\sum_{j=1}^{N} \beta_j\, \frac{|\lambda_{\min}|}{\lambda_j}\, |\lambda_j\rangle$$

To obtain accuracy $\epsilon$ in $\lambda_{\min}/\lambda_j$, we need accuracy $\epsilon\,\lambda_j^2/\lambda_{\min}$ in the estimate of $\lambda_j$.

The final complexity is
$$\sim \frac{\kappa^2}{\epsilon}, \qquad \kappa := \frac{|\lambda_{\max}|}{|\lambda_{\min}|}$$
#### Differential equations (Berry, 2010)

Discretise the differential equation, then encode it as a linear system. The simplest discretisation is the Euler method:
$$\frac{d\vec x}{dt} = A\vec x + \vec b \quad\Longrightarrow\quad \vec x_{j+1} = \vec x_j + h\,(A\vec x_j + \vec b)$$

For example, with the first block row setting the initial condition and the final rows setting $\vec x$ to be constant:
$$
\begin{pmatrix}
I & 0 & 0 & 0 & 0 \\
-(I + Ah) & I & 0 & 0 & 0 \\
0 & -(I + Ah) & I & 0 & 0 \\
0 & 0 & -I & I & 0 \\
0 & 0 & 0 & -I & I
\end{pmatrix}
\begin{pmatrix} \vec x_0 \\ \vec x_1 \\ \vec x_2 \\ \vec x_3 \\ \vec x_4 \end{pmatrix}
=
\begin{pmatrix} \vec x_{\rm in} \\ \vec b h \\ \vec b h \\ 0 \\ 0 \end{pmatrix}
$$
#### Quantum walks

A classical walk has an integer position $x$ that jumps either to the left or the right at each step. The resulting distribution is a binomial distribution, or a normal distribution in the limit.

The quantum walk has position and coin values $|x, c\rangle$. It then alternates coin and step operators, e.g.
$$|x, \pm 1\rangle \mapsto \left(|x, -1\rangle \pm |x, 1\rangle\right)/\sqrt{2}, \qquad |x, c\rangle \mapsto |x + c, c\rangle$$
The position can progress linearly in the number of steps.
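A minimal sketch of the coined (Hadamard) walk on a line, showing the ballistic spread; the coin convention here is one common choice, not necessarily the lecture's.

```python
import numpy as np

# Coined (Hadamard) walk on a line: coin c in {0, 1} steps left/right.
T = 50
psi = np.zeros((2 * T + 1, 2), dtype=complex)
psi[T, 0] = 1.0                          # start at the origin with coin |0>

Hc = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for _ in range(T):
    psi = psi @ Hc.T                     # coin operator on every site
    new = np.zeros_like(psi)
    new[:-1, 0] = psi[1:, 0]             # coin 0: step left
    new[1:, 1] = psi[:-1, 1]             # coin 1: step right
    psi = new

positions = np.arange(-T, T + 1)
prob = (np.abs(psi) ** 2).sum(axis=1)
mean = (prob * positions).sum()
sigma = np.sqrt((prob * positions ** 2).sum() - mean ** 2)
print(sigma)   # spreads linearly in T, unlike the classical sqrt(T)
```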
#### Quantum walk on a graph (Farhi & Gutmann, 1998)

The walk position is any node on the graph. An edge between $v$ and $v'$ is denoted $vv'$. The quantity $d(v)$ is the number of edges incident on vertex $v$.

Describe the generator matrix $M$ by
$$
M_{v'v} = \begin{cases}
\gamma, & v \ne v',\ vv' \in E \\
0, & v \ne v',\ vv' \notin E \\
-d(v)\,\gamma, & v = v'
\end{cases}
$$

The probability distribution for a continuous walk has the differential equation
$$\frac{d p_v(t)}{dt} = \sum_{v'} M_{v v'}\, p_{v'}(t)$$
Quantum mechanically we have
$$|\psi(t)\rangle = \sum_v a_v(t)\, |v\rangle, \qquad i\,\frac{d a_v(t)}{dt} = \sum_{v'} H_{v v'}\, a_{v'}(t)$$

The natural quantum analogue is $H_{v v'} = M_{v v'}$. We take
$$
H_{v v'} = \begin{cases}
\gamma, & v \ne v',\ vv' \in E \\
0, & \text{otherwise.}
\end{cases}
$$
Probability is conserved because $H$ is Hermitian.
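A small illustration: the continuous-time walk on a 5-cycle (my own example graph), with $e^{-iHt}$ computed by eigendecomposition. Total probability is conserved because $H$ is Hermitian.

```python
import numpy as np

# Continuous-time quantum walk on a 5-cycle: H_{vv'} = gamma on edges.
N, gamma = 5, 1.0
Hm = np.zeros((N, N))
for v in range(N):
    Hm[v, (v + 1) % N] = Hm[(v + 1) % N, v] = gamma

w, V = np.linalg.eigh(Hm)
def evolve(psi, t):
    return V @ (np.exp(-1j * w * t) * (V.T @ psi))

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0
psi_t = evolve(psi0, 2.7)
print(np.linalg.norm(psi_t))   # norm stays 1: probability is conserved
```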
#### Quantum walk on a graph (Childs, Farhi & Gutmann, 2002)

The goal is to traverse the graph, two binary trees glued together, from entrance to exit. Classically the random walk will take exponential time.

For the quantum walk, define a superposition state over each column:
$$|\mathrm{col}\ j\rangle = \frac{1}{\sqrt{N_j}}\sum_{v \in \text{column } j} |v\rangle, \qquad
N_j = \begin{cases} 2^j, & 0 \le j \le n \\ 2^{2n+1-j}, & n+1 \le j \le 2n+1 \end{cases}$$

On these states the matrix elements of the Hamiltonian are
$$\langle \mathrm{col}\ j |H| \mathrm{col}\ j \pm 1\rangle = \sqrt{2}\,\gamma$$
#### Quantum walk on a graph (Childs, Cleve, Deotto, Farhi, Gutmann & Spielman, 2003)

Now a random cycle connects the two trees. All vertices (except entrance and exit) have degree 3.

Again using column states, the matrix elements of the Hamiltonian are
$$\langle \mathrm{col}\ j |H| \mathrm{col}\ j \pm 1\rangle =
\begin{cases} \sqrt{2}\,\gamma, & j \ne n \\ 2\,\gamma, & j = n \end{cases}$$

This is a line with a defect. There are reflections off the defect, but the quantum walk still reaches the exit efficiently.
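The reduced column-space dynamics can be simulated directly. A sketch with a small tree depth $n = 2$ and the arbitrary choice $\gamma = 1$:

```python
import numpy as np

# Column-space Hamiltonian of the glued-trees walk: a line with a defect.
n, gamma = 2, 1.0                         # small tree depth, illustrative
cols = 2 * n + 2                          # columns 0 .. 2n+1
coup = [2 * gamma if j == n else np.sqrt(2) * gamma for j in range(cols - 1)]
Hm = np.diag(coup, 1) + np.diag(coup, -1)

w, V = np.linalg.eigh(Hm)
psi0 = np.zeros(cols)
psi0[0] = 1.0                             # start at the entrance column
p_exit = [abs((V @ (np.exp(-1j * w * t) * (V.T @ psi0)))[-1]) ** 2
          for t in np.linspace(0, 20, 400)]
print(max(p_exit))  # appreciable probability reaches the exit column
```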
#### NAND tree quantum walk (Farhi, Goldstone & Gutmann, 2007)

In a game tree I alternate making moves with an opponent. In this example, if I move first then I can always direct the ant to the sugar cube.

What is the complexity of doing this in general? Do we need to query all the leaves?

[Figure: a game tree with alternating AND and OR levels over leaves 1–8.]
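Classically, the game value is an alternating OR/AND evaluation over the leaves, which is equivalent to a tree built purely of NAND gates. A sketch (function names are my own):

```python
# Game value of a binary tree: my levels are OR, opponent levels are AND.
def game_value(leaves, my_move=True):
    if len(leaves) == 1:
        return leaves[0]
    half = len(leaves) // 2
    l = game_value(leaves[:half], not my_move)
    r = game_value(leaves[half:], not my_move)
    return (l | r) if my_move else (l & r)

# The same value from a tree built purely of NAND gates.
def nand_tree(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = [1 - (level[i] & level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# For trees of even depth the two evaluations agree on every input.
from itertools import product
assert all(game_value(list(b)) == nand_tree(b) for b in product([0, 1], repeat=4))
```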
[Figure: the same game tree rewritten as a tree of NAND gates, using NOT gates to convert the AND and OR levels.]
#### NAND tree quantum walk (Farhi, Goldstone & Gutmann, 2007)

The Hamiltonian is a sum of an oracle Hamiltonian $H_O$, representing the connections, and a fixed driving Hamiltonian $H_D$, which is the remainder of the tree:
$$H = H_O + H_D$$

Prepare a travelling wave packet on the left. If the answer to the NAND tree problem is 1, then after a fixed time the wave packet will be found on the right. The reflection depends on the solution of the NAND tree problem.
#### Simulating quantum walks

A more realistic scenario is that we have an oracle that provides the structure of the graph; i.e., a query to a node returns the nodes that are connected to it.

The quantum oracle is queried with a node number $x$ and a neighbour number $i$. It returns a result via the quantum operation
$$|x, i\rangle|0\rangle \mapsto |x, i\rangle|y_i\rangle$$
Here $y_i$ is the $i$'th neighbour of $x$.
#### Decomposing the Hamiltonian (Aharonov & Ta-Shma, 2003)

In the matrix picture, we have a sparse matrix. The rows and columns correspond to node numbers, and the ones indicate connections between nodes. The oracle gives us the position of the $i$'th nonzero element in column $x$.

[Figure: a sparse 0–1 adjacency matrix.]
We want to be able to separate the Hamiltonian into 1-sparse parts. This is equivalent to a graph colouring: the graph edges are coloured such that the edges at each node all have distinct colours.
#### Graph colouring (Berry, Ahokas, Cleve & Sanders, 2007)

How do we do this colouring?

First guess: for each node, assign edges sequentially according to their numbering. This does not work because the edge between nodes $v$ and $w$ may be edge 1 (for example) of $v$, but edge 2 of $w$.

Second guess: for the edge between $v$ and $w$, colour it according to the pair of numbers $(i, j)$, where it is edge $i$ of node $v$ and edge $j$ of node $w$. We decide the order such that $v < w$.

It is still possible to have ambiguity: say we have $u < v < w$.
[Figure: a string of nodes in which consecutive edges receive the same colour pair, e.g. both coloured (1,2).]

To resolve the ambiguity, use a string of nodes with equal edge colours, and compress.
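The "second guess" colouring can be sketched directly. The graph below is my own illustrative example, on which the pair colouring happens to be valid; in general the $u < v < w$ ambiguity described above can still occur.

```python
# "Second guess" colouring: the edge {v, w} with v < w gets colour (i, j)
# when it is the i'th edge of v and the j'th edge of w (illustrative graph).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

nbrs = {}
for v, w in edges:
    nbrs.setdefault(v, []).append(w)
    nbrs.setdefault(w, []).append(v)
for v in nbrs:
    nbrs[v].sort()                       # neighbour numbering at each node

def colour(v, w):
    v, w = min(v, w), max(v, w)          # decide the order so that v < w
    return (nbrs[v].index(w), nbrs[w].index(v))

colours = {e: colour(*e) for e in edges}

# A valid 1-sparse decomposition: each node sees each colour at most once.
for v in nbrs:
    incident = [colours[e] for e in edges if v in e]
    assert len(incident) == len(set(incident))
```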
#### General Hamiltonian oracles (Aharonov & Ta-Shma, 2003)

More generally, we can perform a colouring on a graph with matrix elements of arbitrary (Hermitian) values. Then we also require an oracle to give us the values of the matrix elements,
$$|x, y\rangle|0\rangle \mapsto |x, y\rangle|H_{xy}\rangle$$
in addition to the neighbour oracle $|x, i\rangle|0\rangle \mapsto |x, i\rangle|y_i\rangle$.

[Figure: a sparse Hermitian matrix with arbitrary complex entries.]
#### Simulating 1-sparse case (Aharonov & Ta-Shma, 2003)

[Figure: a 1-sparse Hermitian matrix, with at most one nonzero entry in each row and column.]

Assume we have a 1-sparse matrix. How can we simulate evolution under this Hamiltonian? Two cases:
1. If the element is on the diagonal, then we have a 1D subspace.
2. If the element is off the diagonal, then we need a 2D subspace.
We are given a column number $x$. There are then 5 quantities that we want to calculate:
1. $b$: a bit registering whether the element is on or off the diagonal; i.e., whether $x$ belongs to a 1D or 2D subspace.
2. $m$: the minimum number out of the (1D or 2D) subspace to which $x$ belongs.
3. $M$: the maximum number out of the subspace to which $x$ belongs.
4. $h$: the entries of $H$ in the subspace to which $x$ belongs.
5. $U$: the evolution under $H$ for time $t$ in the subspace.

We have a unitary operation that maps
$$|x\rangle|0\rangle \mapsto |x\rangle|b, m, M, h, U\rangle$$
We consider a superposition of the two states in the subspace,
$$|\psi\rangle = \alpha|m\rangle + \beta|M\rangle$$
Then we obtain
$$|\psi\rangle|0\rangle \mapsto |\psi\rangle|b, m, M, h, U\rangle$$
A second operation implements the controlled operation based on the stored approximation of the unitary operation $U$:
$$|\psi\rangle|b, m, M, h, U\rangle \mapsto \left(U|\psi\rangle\right)|b, m, M, h, U\rangle$$
This gives us
$$\left(U|\psi\rangle\right)|b, m, M, h, U\rangle$$
Inverting the first operation then yields
$$\left(U|\psi\rangle\right)|0\rangle$$
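The classical content of this decomposition, exponentiating a 1-sparse Hermitian matrix block by block, can be sketched as follows (the example matrix is my own):

```python
import numpy as np

# Exponentiate a 1-sparse Hermitian matrix block by block:
# diagonal entries give 1D subspaces, off-diagonal pairs give 2D subspaces.
H = np.array([[0, 0, 2 - 1j, 0],
              [0, -3, 0, 0],
              [2 + 1j, 0, 0, 0],
              [0, 0, 0, 1.5]])

def one_sparse_exp(H, t):
    N = H.shape[0]
    U = np.zeros((N, N), dtype=complex)
    done = set()
    for x in range(N):
        if x in done:
            continue
        ys = np.nonzero(H[x])[0]
        y = ys[0] if len(ys) else x
        if y == x:                            # on the diagonal: 1D subspace
            U[x, x] = np.exp(-1j * H[x, x] * t)
            done.add(x)
        else:                                 # off the diagonal: 2D subspace
            idx = np.ix_([x, y], [x, y])
            w, V = np.linalg.eigh(H[idx])
            U[idx] = (V * np.exp(-1j * w * t)) @ V.conj().T
            done.update({x, y})
    return U

t = 0.7
w, V = np.linalg.eigh(H)
assert np.allclose(one_sparse_exp(H, t), (V * np.exp(-1j * w * t)) @ V.conj().T)
```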
#### Applications

- 2007: Discrete query NAND algorithm – Childs, Cleve, Jordan, Yeung
- 2009: Solving linear systems – Harrow, Hassidim, Lloyd
- 2009: Implementing sparse unitaries – Jordan, Wocjan
- 2010: Solving linear differential equations – Berry
- 2013: Algorithm for scattering cross section – Clader, Jacobs, Sprouse
#### Implementing unitaries (Jordan & Wocjan, 2009)

Construct a Hamiltonian from the unitary $U$ as
$$H = \begin{pmatrix} 0 & U \\ U^\dagger & 0 \end{pmatrix}$$
Since $H^2 = I$, simulating evolution under this Hamiltonian gives
$$e^{-iHt} = I\cos t - iH\sin t$$
Simulating for time $t = \pi/2$ gives
$$e^{-iH\pi/2}\,|1\rangle|\psi\rangle = -iH\,|1\rangle|\psi\rangle = -i\,|0\rangle|U\psi\rangle$$
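A numerical check of this identity, with an arbitrary random 2×2 unitary:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Uu, _ = np.linalg.qr(A)                     # a random 2x2 unitary

# H = [[0, U], [U†, 0]] satisfies H^2 = I, so e^{-iHt} = I cos t - i H sin t.
Hm = np.block([[np.zeros((2, 2)), Uu],
               [Uu.conj().T, np.zeros((2, 2))]])
assert np.allclose(Hm @ Hm, np.eye(4))

w, V = np.linalg.eigh(Hm)
expH = (V * np.exp(-1j * w * np.pi / 2)) @ V.conj().T   # e^{-iH pi/2}

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
out = expH @ np.concatenate([np.zeros(2), psi])         # acts on |1>|psi>
assert np.allclose(out, np.concatenate([-1j * (Uu @ psi), np.zeros(2)]))
```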
#### Quantum simulation via walks

Three ingredients:
1. A Szegedy quantum walk
2. Coherent phase estimation
3. Controlled state preparation

The quantum walk has eigenvalues and eigenvectors related to those of the Hamiltonian. By using phase estimation, we can estimate the eigenvalue, then implement the phase that is actually needed for the Hamiltonian evolution.

#### Szegedy quantum walk (Szegedy, 2004)

The walk uses two reflections,
$$2TT^\dagger - I \qquad\text{and}\qquad 2SS^\dagger - I$$
The first is controlled by the first register and acts on the second register. Given some matrix $P[j,k]$, the operator $T$ is defined by
$$|\phi_j\rangle = \sum_{k=1}^{N} \sqrt{P[j,k]}\,|k\rangle, \qquad T = \sum_{j=1}^{N} |j\rangle\langle j| \otimes |\phi_j\rangle$$
The diffusion operator $2SS^\dagger - I$ is controlled by the second register and acts on the first; use a similar definition with a matrix $Q[j,k]$ and states $|\chi_k\rangle$. Both are controlled reflections:
$$2TT^\dagger - I = \sum_{j=1}^{N} |j\rangle\langle j| \otimes \left(2|\phi_j\rangle\langle\phi_j| - I\right)$$
$$2SS^\dagger - I = \sum_{k=1}^{N} \left(2|\chi_k\rangle\langle\chi_k| - I\right) \otimes |k\rangle\langle k|$$

The eigenvalues and eigenvectors of the step of the quantum walk,
$$\left(2TT^\dagger - I\right)\left(2SS^\dagger - I\right),$$
are related to those of a matrix formed from $P[j,k]$ and $Q[j,k]$.
#### Szegedy walk for simulation (Berry & Childs, 2012)

Use a symmetric system, with
$$P[j,k] = Q[j,k]^* = \frac{H_{jk}^*}{X}$$
Then the eigenvalues and eigenvectors are related to those of the Hamiltonian.

In reality we need to modify to a "lazy" quantum walk, with
$$|\phi_j\rangle = \sqrt{\epsilon}\sum_{k=1}^{N}\sqrt{\frac{H_{jk}^*}{X}}\,|k\rangle + \sqrt{1 - \epsilon\,\sigma_j}\,|N+1\rangle, \qquad \sigma_j := \sum_{k=1}^{N}\frac{|H_{jk}|}{X}$$
Grover preparation gives states of the form
$$\frac{1}{\sqrt{N}}\sum_{k=1}^{N}|k\rangle\left(\sqrt{\epsilon\,\frac{|H_{jk}|}{X}}\,|0\rangle + \sqrt{1 - \epsilon\,\frac{|H_{jk}|}{X}}\,|1\rangle\right)$$
Three step process:
1. Start with the state in one of the subsystems, and perform controlled state preparation.
2. Perform steps of the quantum walk to approximate the Hamiltonian evolution.
3. Invert the controlled state preparation, so the final state is in one of the subsystems.

Step 2 can just be performed with small $\epsilon$ for the lazy quantum walk, or can use phase estimation.

The Hamiltonian has eigenvalues $\lambda$, so evolution under the Hamiltonian has eigenvalues $e^{-i\lambda t}$. The step of the quantum walk has eigenvalues
$$\mu_\pm = \pm e^{\pm i \arcsin(\epsilon\lambda/X)}$$
The complexity is the maximum of the cost of the walk steps and the cost of the phase estimation.