Quantum Machine Learning: Thinking and Exploration in Neural Network Models for Quantum Science and Quantum Computing
- Author
- Claudio Conti
- Publisher
- Springer
- Year
- 2023
- Language
- English
- Pages
- 393
- Series
- Quantum Science and Technology
- Category
- Library
Free of charge; no registration required. For personal study only.
Synopsis
This book presents a new way of thinking about quantum mechanics and machine learning by merging the two. The two fields may seem theoretically disparate, but their link becomes clear through the density matrix operator, which can be readily approximated by neural network models, permitting a formulation of quantum physics in which physical observables are computed via neural networks. Beyond demonstrating the natural affinity of quantum physics and machine learning, this viewpoint opens rich possibilities in computation, efficient hardware, and scalability. One also obtains trainable models to optimize applications and fine-tune theories, such as approximating the ground state in many-body systems and boosting the performance of quantum circuits.

The book begins with programming tools and basic concepts of machine learning, together with the necessary background from quantum mechanics and quantum information. This enables the basic building blocks, neural network models for vacuum states, to be introduced. Highlights that follow include: non-classical state representations, with squeezers and beam splitters used to implement the primary layers for quantum computing; boson sampling with neural network models; an overview of available quantum computing platforms, their models, and their programming; and neural network models as a variational ansatz for many-body Hamiltonian ground states, with applications to Ising machines and solitons.

The book emphasizes coding, with many open-source examples in Python and TensorFlow, while MATLAB and Mathematica routines clarify and validate proofs. It is essential reading for graduate students and researchers who want to develop both the physics and the coding knowledge needed to understand the rich interplay of quantum mechanics and machine learning.
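To give a flavor of the quantum feature maps covered in Chapter 2, the sketch below encodes a real number x into a coherent state |α = x⟩ (truncated in the Fock basis) and evaluates the resulting quantum kernel k(x, y) = |⟨α(x)|α(y)⟩|², which for real inputs reduces to the Gaussian kernel exp(−(x − y)²). This is a minimal NumPy illustration of the idea, not code from the book; the function names `coherent_state` and `quantum_kernel` and the `cutoff` parameter are our own.

```python
import numpy as np
from math import factorial

def coherent_state(alpha, cutoff=30):
    """Fock-basis amplitudes of |alpha>, truncated to `cutoff` levels:
    c_n = exp(-|alpha|^2 / 2) * alpha^n / sqrt(n!)."""
    n = np.arange(cutoff)
    norms = np.sqrt(np.array([factorial(int(k)) for k in n], dtype=float))
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / norms

def quantum_kernel(x, y, cutoff=30):
    """Kernel k(x, y) = |<alpha(x)|alpha(y)>|^2 for the map x -> |alpha = x>."""
    overlap = np.vdot(coherent_state(x, cutoff), coherent_state(y, cutoff))
    return abs(overlap) ** 2

# For real x and y the kernel equals exp(-(x - y)^2), the Gaussian kernel.
print(quantum_kernel(0.3, 1.1))    # numerical truncated-Fock value
print(np.exp(-(0.3 - 1.1) ** 2))   # closed-form value for comparison
```

The truncation at 30 Fock levels is more than enough for |α| of order one, so the numerical kernel matches the closed form to machine precision; this is the sense in which a classical kernel method can be "kernelized" by a quantum state overlap.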
Table of Contents
Preface
Supplementary Code
Contents
Abbreviations and Acronyms
Definitions and Symbols
1 Quantum Mechanics and Data-Driven Physics
1.1 Introduction
1.2 Quantum Feature Mapping and the Computational Model
1.3 Example 1: The First Supervised Learning by Superconducting Qubits
1.4 Example 2: Photonic Quantum Processors, Boson Sampling, and Quantum Advantage
1.5 Quantum State Representations
1.6 Pros and Cons of Quantum Neural Networks
1.7 Quantum Mechanics and Kernel Methods
1.8 More on Kernel Methods
1.9 Coding Example of Kernel Classification
1.10 Support Vector Machine: The Widest Street Approach
1.11 The Link of Kernel Machines with the Perceptron
1.12 Kernel Classification of Nonlinearly Separable Data
1.13 Feature Mapping
1.14 The Drawbacks in Kernel Methods
1.15 Further Reading
References
2 Kernelizing Quantum Mechanics
2.1 Introduction
2.2 The Computational Model by Quantum Feature Maps and Quantum Kernels
2.3 Quantum Feature Map by Coherent States
2.4 Quantum Classifier by Coherent States
2.5 How to Measure Scalar Products with Coherent States
2.6 Quantum Feature Map by Squeezed Vacuum
2.7 A Quantum Classifier by Squeezed Vacuum
2.8 Measuring Scalar Products with General States: The SWAP Test
2.9 Boson Sampling and Quantum Kernels
2.10 Universality of Quantum Feature Maps and the Reproducing Kernel Theorem
2.11 Further Reading
References
3 Qubit Maps
3.1 Introduction
3.2 Feature Maps with Qubits
3.3 Using Tensors with Single Qubits
3.3.1 One-Qubit Gate as a Tensor
3.4 More on Contravariant and Covariant Tensors in TensorFlow
3.5 Hadamard Gate as a Tensor
3.6 Recap on Vectors and Matrices in TensorFlow
3.6.1 Matrix-Vector Multiplication as a Contraction
3.6.2 Lists, Tensors, Row, and Column Vectors
3.7 One-Qubit Gates
3.8 Scalar Product
3.9 Two-Qubit Tensors
3.9.1 Further Remarks on the Tensor Indices
3.10 Two-Qubit Gates
3.10.1 Coding Two-Qubit Gates
3.10.2 CNOT Gate
3.10.3 CZ Gate
3.11 Quantum Feature Maps with a Single Qubit
3.11.1 Rotation Gates
3.12 Quantum Feature Maps with Two Qubits
3.13 Coding the Entangled Feature Map
3.13.1 The Single Pass Feature Map
3.13.2 The Iterated Feature Map
3.14 The Entangled Kernel and Its Measurement with Two Qubits
3.15 Classical Simulation of the Entangled Kernel
3.15.1 A Remark on the Computational Time
3.16 Further Reading
References
4 One-Qubit Transverse-Field Ising Model and Variational Quantum Algorithms
4.1 Introduction
4.2 Mapping Combinatorial Optimization to the Ising Model
4.2.1 Number Partitioning
4.2.2 Maximum Cut
4.3 Quadratic Unconstrained Binary Optimization (QUBO)
4.4 Why Use a Quantum Computer for Combinatorial Optimization?
4.5 The Transverse-Field Ising Model
4.6 One-Qubit Transverse-Field Ising Model
4.6.1 h=0
4.6.2 J=0
4.7 Coding the One-Qubit Transverse-Field Ising Hamiltonian
4.7.1 The Hamiltonian as a Tensor
4.7.2 Variational Ansatz as a Tensor
4.8 A Neural Network Layer for the Hamiltonian
4.9 Training the One-Qubit Model
4.10 Further Reading
References
5 Two-Qubit Transverse-Field Ising Model and Entanglement
5.1 Introduction
5.2 Two-Qubit Transverse-Field Ising Model
5.2.1 h0=h1=0
5.2.2 Entanglement in the Ground State with No External Field
5.2.3 h0=h1=h≠0
5.3 Entanglement and Mixtures
5.4 The Reduced Density Matrix
5.5 The Partial Trace
5.6 One-Qubit Density Matrix Using Tensors
5.7 Coding the One-Qubit Density Matrix
5.8 Two-Qubit Density Matrix by Tensors
5.9 Coding the Two-Qubit Density Matrix
5.10 Partial Trace with Tensors
5.11 Entropy of Entanglement
5.12 Schmidt Decomposition
5.13 Entropy of Entanglement with Tensors
5.14 Schmidt Basis with Tensors
5.15 Product States and Maximally Entangled States with Tensors
5.16 Entanglement in the Two-Qubit TIM
5.16.1 h0=h and h1=0
5.17 Further Reading
References
6 Variational Algorithms, Quantum Approximate Optimization Algorithm, and Neural Network Quantum States with Two Qubits
6.1 Introduction
6.2 Training the Two-Qubit Transverse-Field Ising Model
6.3 Training with Entanglement
6.4 The Quantum Approximate Optimization Algorithm
6.5 Neural Network Quantum States
6.6 Further Reading
References
7 Phase Space Representation
7.1 Introduction
7.2 The Characteristic Function and Operator Ordering
7.3 The Characteristic Function in Terms of Real Variables
7.4 Gaussian States
7.4.1 Vacuum State
7.4.2 Coherent State
7.4.3 Thermal State
7.4.4 Proof of Eq.(7.42)
7.5 Covariance Matrix in Terms of the Derivatives of χ
7.5.1 Proof of Eqs.(7.72) and (7.73) for General States
7.5.2 Proof of Eqs.(7.72) and (7.73) for a Gaussian State
7.6 Covariance Matrices and Uncertainties
7.6.1 The Permuted Covariance Matrix
Proof of Eq.(7.109)
7.6.2 Ladder Operators and Complex Covariance Matrix
Proof of Eq.(7.123)
7.7 Gaussian Characteristic Function
7.7.1 Remarks on the Shape of a Vector
7.8 Linear Transformations and Symplectic Matrices
7.8.1 Proof of Eqs.(7.141) and (7.142)
7.9 The U and M Matrices
7.9.1 Coding the Matrices Rp and Rq
7.10 Generating a Symplectic Matrix for a Random Gate
7.11 Further Reading
References
8 States as Neural Networks and Gates as Pullbacks
8.1 Introduction
8.2 The Simplest Neural Network for a Gaussian State
8.3 Gaussian Neural Network with Bias Input
8.4 The Vacuum Layer
8.5 Building a Model and the "Bug Train"
8.6 Pullback
8.7 The Pullback Layer
8.8 Pullback of Gaussian States
8.9 Coding the Linear Layer
8.10 Pullback Cascading
8.11 The Glauber Displacement Layer
8.12 A Neural Network Representation of a Coherent State
8.13 A Linear Layer for a Random Interferometer
Reference
9 Quantum Reservoir Computing
9.1 Introduction
9.2 Observable Quantities as Derivatives of χ
9.3 A Coherent State in a Random Interferometer
9.4 Training a Complex Medium for an Arbitrary Coherent State
9.5 Training to Fit a Target Characteristic Function
9.6 Training by First Derivatives
9.7 Training by Second Derivatives
9.7.1 Proof of Eq.(9.5)
9.8 Two Trainable Interferometers and a Reservoir
9.9 Phase Modulator
9.10 Training Phase Modulators
9.11 Further Reading
References
10 Squeezing, Beam Splitters, and Detection
10.1 Introduction
10.2 The Generalized Symplectic Operator
10.3 Single-Mode Squeezed State
10.4 Multimode Squeezed Vacuum Model
10.5 Covariance Matrix and Squeezing
10.6 Squeezed Coherent States
10.6.1 Displacing the Squeezed Vacuum
10.6.2 Squeezing the Displaced Vacuum
10.7 Two-Mode Squeezing Layer
10.8 Beam Splitter
10.9 The Beam Splitter Layer
10.10 Photon Counting Layer
10.11 Homodyne Detection
10.12 Measuring the Expected Value of the Quadrature Operator
References
11 Uncertainties and Entanglement
11.1 Introduction
11.2 The HeisenbergLayer for General States
11.2.1 The LaplacianLayer
11.2.2 The BiharmonicLayer
11.2.3 The HeisenbergLayer
11.3 Heisenberg Layer for Gaussian States
11.4 Testing the HeisenbergLayer with a Squeezed State
11.5 Proof of Eqs.(11.4), (11.5), and (11.9)
11.6 DifferentialGaussianLayer
11.7 Uncertainties in Homodyne Detection
11.7.1 Proof of Eqs.(11.39) and (11.40)
11.8 Uncertainties for Gaussian States
11.9 DifferentialGaussianLayer on Coherent States
11.10 DifferentialGaussianLayer in Homodyne Detection
11.11 Entanglement with Gaussian States
11.12 Beam Splitters as Entanglers
11.13 Entanglement by the Reduced Characteristic Function
11.14 Computing the Entanglement
11.15 Training the Model to Maximize the Entanglement
11.16 Further Reading
References
12 Gaussian Boson Sampling
12.1 Introduction
12.2 Boson Sampling in a Single Mode
12.3 Boson Sampling with Many Modes
12.4 Gaussian States
12.5 Independent Coherent States
12.6 Zero Displacement Case in Complex Variables and the Hafnian
12.6.1 The Resulting Algorithm
12.6.2 Proof of Eq.(12.39)
12.7 The Q-Transform
12.8 The Q-Transform Layer
12.9 The Multiderivative Operator
12.10 Single-Mode Coherent State
12.11 Single-Mode Squeezed Vacuum State
12.12 Multimode Coherent Case
12.13 A Coherent Mode and a Squeezed Vacuum
12.14 A Squeezed Mode and a Coherent Mode in a Random Interferometer
12.15 Gaussian Boson Sampling with Haar Random Unitary Matrices
12.15.1 The Haar Random Layer
12.15.2 A Model with a Varying Number of Layers
12.15.3 Generating the Sampling Patterns
12.15.4 Computing the Pattern Probability
12.16 Training Boson Sampling
12.17 The Loss Function
12.18 Trainable GBS Model
12.19 Boson Sampling the Model
12.20 Training the Model
12.21 Further Reading
References
13 Variational Circuits for Quantum Solitons
13.1 Introduction
13.2 The Bose-Hubbard Model
13.3 Ansatz and Quantum Circuit
13.3.1 Proof of Eq.(13.7)
13.3.2 Proof of Eq.(13.8)*
13.4 Total Number of Particles in Real Variables
13.5 Kinetic Energy in Real Variables
13.6 Potential Energy in Real Variables
13.7 Layer for the Particle Number
13.8 Layer for the Kinetic Energy
13.9 Layer for the Potential Energy
13.10 Layer for the Total Energy
13.11 The Trainable Boson Sampling Ansatz
13.12 Connecting the BSVA to the Measurement Layers
13.13 Training for the Quantum Solitons
13.14 Entanglement of the Single Soliton
13.15 Entangled Bound States of Solitons
13.16 Boson Sampling
13.17 Conclusion
13.18 Further Reading
References
Index