Signal Processing and Machine Learning Theory
Edited by Paulo S.R. Diniz
- Publisher: Academic Press
- Year: 2024
- Language: English
- Pages: 1235
Synopsis
Signal Processing and Machine Learning Theory, authored by world-leading experts, reviews the principles, methods and techniques of essential and advanced signal processing theory. These theories and tools are the driving engines of many current and emerging research topics and technologies, such as machine learning, autonomous vehicles, the internet of things, future wireless communications, medical imaging, etc.
- Provides quick tutorial reviews of important and emerging topics of research in signal processing-based tools
- Presents core principles in signal processing theory and shows their applications
- Discusses some emerging signal processing tools applied in machine learning methods
- References content on core principles, technologies, algorithms and applications
- Includes references to journal articles and other literature on which to build further, more specific, and detailed knowledge
Table of Contents
Front Cover
Signal Processing and Machine Learning Theory
Copyright
Contents
List of contributors
Contributors
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
Chapter 13
Chapter 14
Chapter 15
Chapter 16
Chapter 17
Signal processing and machine learning theory
1 Introduction to signal processing and machine learning theory
1.1 Introduction
1.2 Continuous-time signals and systems
1.3 Discrete-time signals and systems
1.4 Random signals and stochastic processes
1.5 Sampling and quantization
1.6 FIR and IIR filter design
1.7 Digital filter structures and implementations
1.8 Multirate signal processing
1.9 Filter banks and transform design
1.10 Discrete multiscale transforms
1.11 Frames
1.12 Parameter estimation
1.13 Adaptive filtering
1.14 Machine learning: review and trends
1.15 Signal processing over graphs
1.16 Tensor methods in deep learning
1.17 Nonconvex graph learning: sparsity, heavy tails, and clustering
1.18 Dictionaries in machine learning
1.19 Closing comments
References
2 Continuous-time signals and systems
2.1 Introduction
2.2 Continuous-time systems
2.3 Differential equations
2.4 Laplace transform: definition and properties
2.5 Transfer function and stability
2.6 Frequency response
2.7 The Fourier series and the Fourier transform
2.8 Conclusion and future trends
Relevant websites
Glossary
Nomenclature
References
3 Discrete-time signals and systems
3.1 Introduction
3.2 Discrete-time signals: sequences
3.3 Discrete-time systems
3.3.1 Classification
Memoryless systems
Dynamic systems
Linear systems
Nonlinear systems
Time-invariant systems
Time-variant systems
Causal systems
3.4 Linear time-invariant systems
3.4.1 State-space description
3.4.2 Transfer function of a discrete-time LTI system
3.4.3 Finite duration impulse response systems
3.4.4 Infinite duration impulse response systems
3.4.5 Observability and controllability
3.4.6 Stability
3.5 Discrete-time signals and systems with MATLAB®
3.5.1 Discrete-time signals
Unit impulse
Sinusoid
White Gaussian noise
Elaborated signal model
3.5.2 Discrete-time systems representation and implementation
3.6 Conclusions
Notation
References
4 Random signals and stochastic processes
Foreword
Acknowledgment
4.1 Introduction
4.2 Probability
4.2.1 Joint, conditional, and total probability – Bayes' rule
4.2.2 Probabilistic independence
4.2.3 Combined experiments – Bernoulli trials
4.3 Random variable
4.3.1 Probability distributions
4.3.2 Usual distributions
4.3.3 Conditional distribution
4.3.4 Statistical moments
4.3.5 Transformation of random variables
4.3.6 Multiple random variable distributions
4.3.7 Statistically independent random variables
4.3.8 Joint statistical moments
4.3.9 Central limit theorem
4.3.10 Multivariate Gaussian distribution
4.3.11 Transformation of vector random variables
4.3.12 Complex random variables
4.3.13 Application: estimators
4.3.14 Application: feature selection
4.4 Random process
4.4.1 Distributions of random processes and sequences
4.4.2 Statistical independence
4.4.3 First- and second-order moments for random processes and sequences
4.4.4 Stationarity
4.4.5 Properties of correlation functions for WSS processes and sequences
4.4.6 Time averages of random processes and sequences
4.4.7 Ergodicity
4.4.8 An encompassing example
4.4.9 Gaussian processes and sequences
4.4.10 Poisson random process
4.4.11 Complex random processes and sequences
4.4.12 Markov chains
4.4.13 Spectral description of random processes and sequences
4.4.14 White and colored noise
4.4.15 Applications: modulation, bandpass, and band-limited processes, and sampling
4.4.16 Processing of random processes and sequences
4.4.17 Application: characterization of LTI systems
4.4.18 Modeling of bandpass WSS random processes
4.4.19 Statistical modeling of signals: random sequence as the output of an LTI system
4.5 Afterword
References
5 Sampling and quantization
5.1 Introduction
5.1.1 Scope and prerequisites
5.1.2 Chapter outline
5.1.3 Recent and current trends
5.2 Preliminaries
5.2.1 Classification of signals
5.2.2 Discrete-time signals – sequences
5.2.3 Sampling of continuous-time signals
5.2.4 Classification of systems
5.2.5 Digital signal processing of analog signals
5.2.6 Digital communication – analog signal processing of digital signals
5.3 Sampling of deterministic signals
5.3.1 Uniform sampling
5.3.2 Poisson's summation formula
5.3.2.1 Example 1: illustration of Poisson's summation formula
5.3.3 The sampling theorem
5.3.3.1 Example 2: illustration of aliasing
5.3.4 Antialiasing filter
5.3.5 Reconstruction
5.3.5.1 Ideal reconstruction
5.3.5.2 Reconstruction using a D/A converter and an analog reconstruction filter
5.3.6 Distortion caused by undersampling
5.3.6.1 Example 3: illustration of distortion due to undersampling
5.3.7 Distortion measure for energy signals
5.3.7.1 Example 4: illustration of distortion measure
5.3.8 Bandpass sampling
5.4 Sampling of stochastic processes
5.4.1 Uniform sampling
5.4.2 Reconstruction of stochastic processes
5.4.2.1 Example 5: computation of reconstruction error power
5.5 Nonuniform sampling and generalizations
5.5.1 Time-interleaved ADCs
5.5.2 Problem formulation
5.5.2.1 Relaxed problem
5.5.3 Reconstruction
5.5.4 Reconstruction using a time-varying FIR system
5.5.5 Error metrics
5.5.5.1 DC offset
5.5.5.2 Example 6: effect of oversampling
5.5.6 Reconstruction based on least-squares design
5.5.7 Reconstruction using a polynomial-based approach
5.5.8 Performance discussion
5.5.8.1 Example 7: frequency response mismatch correction
5.5.8.2 Example 8: frequency response mismatch correction in the presence of nonlinearities
5.6 Quantization
5.6.1 Quantization errors in A/D conversion
5.6.2 Round-off errors
5.6.2.1 Example 9: quantization and round-off noise in an IIR filter
5.6.2.2 Example 10: quantization and round-off noise in an FIR filter
5.7 Oversampling techniques and MIMO systems
5.7.1 Oversampled A/D converters
5.7.2 Oversampled D/A converters
5.7.3 Multiple-input multiple-output systems
5.7.3.1 Example 11: correlated channel quantization errors
5.8 Discrete-time modeling of mixed-signal systems
5.8.1 Input/output relations
5.8.2 Determining G(z)
5.8.3 Stability
5.8.4 Frequency responses H(jΩ) and G(e^jΩT)
5.8.5 Example 12: continuous-time-input ΣΔ-modulator
5.8.6 Example 13: single feedback loop system
References
6 Digital filter structures and their implementation
6.1 Introduction
6.1.1 Nonrecursive FIR filters
6.1.2 Analog filters
6.1.3 Wave digital filters
6.1.4 Implementation of digital filters
6.1.5 MATLAB® toolbox
6.2 Properties of digital filters
6.2.1 Design target
6.3 Synthesis of digital FIR filters
6.3.1 REMEZ_FIR
6.3.1.1 Example
6.3.2 Notch FIR filters
6.3.3 Half-band FIR filters
6.3.4 Complementary FIR filters
6.3.5 Minimum-phase FIR filters
6.3.6 Miscellaneous FIR filters
6.4 FIR structures
6.4.1 Direct-form FIR structures
6.4.1.1 Adder tree
6.4.2 Transposed direct form
6.4.3 Linear-phase FIR structures
6.4.4 Half-band FIR filters
6.4.5 Delay-complementary FIR structure
6.4.6 Cascade-form FIR structures
6.4.7 Lattice FIR structures
6.4.8 Recursive frequency-sampling structures
6.4.9 Multirate FIR filters
6.5 Frequency response masking filters
6.5.1 Basic FRM structure
6.5.1.1 Case A
6.5.1.2 Case B
6.5.1.3 Conditions for feasible FRM solutions
6.5.1.4 Selection of L
6.5.1.5 Computational complexity
6.5.1.6 Related structure
6.5.1.7 Design procedure
6.5.1.8 Design example
6.5.2 Multistage FRM structure
6.5.2.1 Exercise
6.5.3 Half-band filter
6.5.3.1 Structure
6.5.3.2 Design procedure
6.5.3.3 Design example
6.5.4 Hilbert transformer
6.5.4.1 Exercise
6.5.5 Decimator and interpolator
6.5.5.1 Difficulties of FRM multirate filters
6.5.5.2 A variant of the FRM structure
6.5.5.3 Design example
6.5.5.4 Exercise
6.5.5.5 Two more variants of FRM structures
6.5.5.6 An alternative recursive structure
6.5.6 Filter banks
6.5.7 Cosine-modulated transmultiplexer
6.5.8 FRM structure for recursive filters
6.5.8.1 Design example
6.5.9 2D FRM structure
6.5.10 Summary
6.6 The analog approximation problem
6.6.1 Typical requirements
6.6.2 Standard low-pass approximations
6.6.3 Comparison of the standard approximations
6.6.3.1 Example
6.6.4 Filters with constant pole radius
6.6.5 Frequency transformations
6.7 Doubly resistively terminated lossless networks
6.7.1 Maximal power transfer
6.7.2 Reflection function
6.7.3 Element sensitivity
6.7.4 Errors in the elements in doubly terminated filters
6.7.4.1 Example
6.7.5 Filters with diminishing ripple
6.7.5.1 Example
6.8 Design of doubly resistively terminated analog filters
6.8.1 Ladder structures
6.8.1.1 Structures for low-pass filters
6.8.1.2 Design of ladder structures
6.8.1.3 Example
6.8.2 Analog lattice structures
6.8.2.1 Wave description of two-ports
6.8.3 Realization of lossless one-ports
6.8.3.1 Richards' structures
6.8.3.2 Example
6.8.4 Commensurate length transmission line networks
6.8.5 Richards' variable
6.8.6 Unit elements
6.8.6.1 ZL=Z0 (matched termination)
6.8.6.2 ZL=∞ (open-ended)
6.8.6.3 ZL=0 (short-circuited)
6.9 Design and realization of IIR filters
6.10 Wave digital filters
6.10.1 Wave descriptions
6.10.1.1 Voltage waves
6.10.1.2 Current waves
6.10.1.3 Power waves
6.10.1.4 Reflectance function
6.10.2 Wave-flow building blocks
6.10.2.1 Circuit elements
6.10.2.2 Open-ended unit element
6.10.2.3 Short-circuited unit element
6.10.2.4 Matched termination
6.10.2.5 Resistive load
6.10.2.6 Short circuit
6.10.2.7 Open circuit
6.10.2.8 Voltage signal source
6.10.2.9 Circulator
6.10.3 Interconnection networks
6.10.3.1 Symmetric two-port adaptor
6.10.3.2 Series adaptors
6.10.3.3 Two-port series adaptor
6.10.3.4 Three-port series adaptor
6.10.3.5 Parallel adaptors
6.10.3.6 Two-port parallel adaptor
6.10.3.7 Three-port parallel adaptor
6.10.3.8 Direct interconnection of adaptors
6.10.4 Adaptor transformations
6.10.5 Resonance circuits
6.10.5.1 First-order circuits
6.10.5.2 Second-order series resonance circuit
6.10.5.3 Second-order parallel resonance circuit
6.10.5.4 Second-order Richards' structures
6.10.6 Parasitic oscillations
6.11 Ladder wave digital filters
6.12 Design of lattice wave digital filters
6.12.1 Bandpass lattice filters
6.12.1.1 Example
6.12.2 Lattice wave digital filter with a constant pole radius
6.12.2.1 Example
6.12.2.2 Half-band lattice wave digital filter
6.12.2.3 Example
6.12.3 Lattice filters with a pure delay branch
6.12.3.1 Half-band lattice filter with a pure delay
6.12.3.2 Example
6.13 Circulator-tree wave digital filters
6.14 Numerically equivalent state-space realization of wave digital filters
6.15 Computational properties of filter algorithms
6.15.1 Latency and throughput
6.15.2 Maximal sample rate
6.15.3 Cyclic scheduling
6.16 Architecture
6.17 Arithmetic operations
6.17.1 Addition and subtraction
6.17.1.1 Ripple-carry addition
6.17.1.2 Bit-serial addition and subtraction
6.17.1.3 Digit-serial addition and subtraction
6.17.2 Multiplication
6.17.2.1 Serial/parallel multipliers
6.17.2.2 Example
6.17.2.3 Constant multiplication
6.18 Composite arithmetic operations
6.18.1 Multiple-constant multiplication
6.18.2 Distributed arithmetic
6.18.2.1 Implementation of FIR filters using distributed arithmetic
6.18.2.2 Memory reduction techniques
6.19 Power reduction techniques
6.19.1 Power supply voltage scaling
6.19.2 Overall strategy
References
7 Multirate signal processing for software radio architectures
7.1 Introduction
7.2 The sampling process and the "resampling" process
7.3 Digital filters
7.4 Windowing
7.5 Basics on multirate filters
7.6 From single-channel down converter to standard down converter channelizer
7.6.1 From single channel up converter to standard up converter channelizer
7.7 Modifications of the standard down converter channelizer – M:2 down converter channelizer
7.7.1 Modifications of the standard up converter channelizer – 2:M up converter channelizer
7.8 Preliminaries on software-defined radios
7.9 Proposed architectures for software radios
7.9.1 Proposed digital down converter architecture
7.9.1.1 Postanalysis block and synthesis up converter channelizers
7.9.1.2 High-quality arbitrary interpolator
7.9.1.3 Nyquist filters
7.9.2 Digital down converter simulation results
7.9.3 Proposed up converter architecture
7.9.4 Digital up converter simulation results
7.10 Case study: enabling automated signal detection, segregation, and classification
7.10.1 Signal detection and segregation using the channelizer
7.10.2 Signal classification
Acknowledgment
7.11 Closing comments
Glossary
References
8 Modern transform design for practical audio/image/video coding applications
8.1 Introduction
8.2 Background and fundamentals
8.2.1 Notation
8.2.2 Transform fundamentals
8.2.3 Optimal orthogonal transform
8.2.4 Popular transforms in signal processing: DFT, WHT, DCT
8.2.5 The filter bank connection
8.2.6 The lapped transform connection
8.3 Design strategy
8.3.1 Desirable transform properties
8.4 Approximation approach via direct scaling
8.4.1 H.264 4×4 transform design
8.4.2 Integer DCT design via the principle of dyadic symmetry
8.4.3 Direct scaling of a rotation angle
8.5 Approximation approach via structural design
8.5.1 Lifting step
8.5.2 Lifting-based approximation
8.5.3 Lossless color transform design
8.5.4 Integer DCT design
8.5.5 Lifting for complex-coefficient transforms
8.6 Wavelet filter design via spectral factorization
8.6.1 Wavelets and filter banks
8.6.2 Spectral factorization
8.6.2.1 5-/3-tap symmetric and 4-tap orthogonal wavelet filters
8.6.2.2 9-/7-tap symmetric and eight-tap orthogonal wavelet filters
8.6.3 Lifting and the wavelet transform
8.7 Higher-order design approach via optimization
8.7.1 General modular construction
8.7.2 Higher-order design with pre-/postprocessing operators
8.7.2.1 A simple example
8.7.2.2 More general solutions
8.7.2.3 HD photo or JPEG-XR transform
8.7.3 Adaptive decomposition design
8.7.3.1 Adaptive pre-/postprocessing support
8.7.3.2 Adaptive transform block size
8.7.4 Modulated design
8.8 Conclusion
References
9 Data representation: from multiscale transforms to neural networks
9.1 Introduction
9.1.1 An overview of multiscale transforms
9.1.2 Signal representation in functional bases
9.1.3 The continuous Fourier transform
9.1.4 Fourier series
9.1.5 The discrete Fourier transform
9.1.6 The windowed Fourier transform
9.2 Wavelets: a multiscale analysis tool
9.2.1 The wavelet transform
9.2.2 Inverting the wavelet transform
9.2.3 Wavelets and multiresolution analysis
9.2.3.1 Multiresolution analysis
9.2.3.2 Properties of wavelets
9.2.3.3 Vanishing moments
9.2.3.4 Regularity/smoothness
9.2.3.5 Wavelet symmetry
9.2.3.6 A filter bank view: implementation
9.2.3.7 Refining a wavelet basis: wavelet packet and local cosine bases
9.2.3.8 Wavelet packets
9.2.3.9 Basis search
9.2.4 Multiresolution applications in estimation problems
9.2.4.1 Signal estimation/denoising and modeling
9.2.4.2 Problem statement
9.2.4.3 The coding length criterion
9.2.4.4 Coding for worst-case noise
9.2.4.5 Numerical experiments
9.3 Curvelets and their applications
9.3.1 The continuous curvelet transform
9.3.1.1 Mother curvelets
9.3.1.2 The amplitude and angular windows
9.3.1.3 The curvelet atoms
9.3.1.4 The formal definition of the continuous curvelet transform
9.3.1.5 The properties of the curvelet transform
9.3.1.6 The reproducing formula
9.3.2 The discrete curvelet transform
9.3.2.1 The selection of windows
9.3.2.2 Constructing a tight frame of curvelet atoms
9.3.2.3 Implementation procedure for the discrete curvelet transform
9.3.3 Applications
9.4 Contourlets and their applications
9.4.1 The contourlet transform
9.4.1.1 Downsampling and upsampling operators
9.4.1.2 Polyphase decomposition
9.4.2 Laplacian pyramid frames
9.4.2.1 Analysis filter banks and synthesis filter banks of the Laplacian pyramid
9.4.2.2 The wavelet frame associated with the Laplacian pyramid
9.4.3 Directional filter banks
9.4.3.1 Resampling operators
9.4.3.2 The design of the quincunx filter banks
9.4.3.3 The design of the directional filter banks
9.4.3.4 Constructing the basis for the directional filter bank
9.4.4 Contourlet filter bank
9.4.4.1 The wavelet frames of Laplacian filter banks
9.4.4.2 The wavelet frames of directional filter banks
9.5 Shearlets and their applications
9.5.1 Composite wavelets
9.5.2 Shearlet atoms in the spatial plane
9.5.3 Shearlet atoms in the frequency domain
9.5.3.1 The mother shearlet window
9.5.3.2 The Fourier content of shearlet atoms
9.5.4 The continuous shearlet transform
9.5.4.1 Properties of a shearlet transform
9.5.4.2 Shearlets in the horizontal and vertical directions
9.5.5 Discrete shearlet atoms
9.5.6 Numerical implementation of the shearlet transform
9.5.7 Applications
9.5.7.1 Estimation of edge orientation
9.5.7.2 Feature classification
9.5.7.3 Edge detection
9.5.7.4 Image denoising
9.6 Incorporating wavelets into neural networks
9.6.1 Using wavelets in neural networks for signal processing
9.6.1.1 Wavelet neural networks
9.6.1.2 Deep convolutional framelets neural networks
9.6.1.2.1 Convolution framelets (Framelets)
9.6.1.2.2 Deep convolutional framelets neural networks
9.6.2 Using wavelets in neural networks for image processing
9.6.2.1 Cascading wavelets with CNN-based algorithms
9.6.2.1.1 Deep wavelet superresolution
9.6.2.1.2 Wavelet residual network
9.6.2.2 Deep convolutional framelets neural networks
9.6.2.3 Multilevel wavelet CNN
9.A
9.A.1 The z-transform
References
10 Frames in signal processing
10.1 Introduction
10.1.1 Notation
10.2 Basic concepts
10.2.1 The dual frame
10.2.2 Signal analysis and synthesis using frames
10.3 Relevant definitions
10.3.1 The frame operator
10.3.2 The inverse frame
10.3.3 Characterization of frames: basic property
10.4 Some computational remarks
10.4.1 Frames in discrete spaces
10.4.2 Finite vector spaces
10.5 Construction of frames from a prototype signal
10.5.1 Translation, modulation, and dilation operators
10.5.2 Common frame constructions
10.5.3 Frames of translates
10.5.4 Gabor frames
10.5.5 Wavelet frames
10.5.6 Finite vector spaces
10.6 Some remarks and highlights on applications
10.6.1 Signal analysis
10.6.2 Robust data transmission
10.6.3 Gaborgram
10.6.4 Inverse Gabor frame
10.6.5 Gabor frames in discrete spaces
10.6.6 Fast analysis and synthesis operators for Gabor frames
10.6.7 Time-frequency content analysis using frame expansions
10.7 Conclusion
References
11 Parametric estimation
11.1 Introduction
11.1.1 What is parametric estimation?
11.1.2 Parametric models for signals and systems
11.2 Preliminaries
11.2.1 Signals
11.2.2 Data-driven framework
11.2.3 Stochastic model and criteria
11.3 Parametric models for linear time-invariant systems
11.3.1 Autoregressive models
11.3.2 Moving average models
11.3.3 Autoregressive moving average models
11.3.4 Parametric modeling for system function approximation
11.4 Joint process estimation and sequential modeling
11.4.1 Batch joint process estimation
11.4.2 Segmented least-squares
11.4.3 Recursive estimation with RLS and LMS
11.5 Model order estimation
References
12 Adaptive filters
12.1 Introduction
12.1.1 Motivation – acoustic echo cancelation
12.1.2 A quick tour of adaptive filtering
12.1.2.1 Posing the problem
12.1.2.2 Measuring how far we are from the solution
12.1.2.3 Choosing a structure for the filter
12.1.2.4 Searching for the solution
12.1.2.5 Trade-off between speed and precision
12.1.3 Applications
12.1.3.1 Interference cancelation
12.1.3.2 System identification
12.1.3.3 Prediction
12.1.3.4 Inverse system identification
12.1.3.5 A common formulation
12.2 Optimum filtering
12.2.1 Linear least-mean squares estimation
12.2.1.1 Orthogonality condition
12.2.1.2 Implicit versus physical models
12.2.1.3 Undermodeling
12.2.1.4 Zero and nonzero mean variables
12.2.1.5 Sufficiently rich signals
12.2.1.6 Examples
12.2.2 Complex variables and multichannel filtering
12.2.2.1 Widely linear complex least-mean squares
12.2.2.2 Linear complex least-mean squares
12.3 Stochastic algorithms
12.3.1 LMS algorithm
12.3.1.1 A deterministic approach for the stability of LMS
12.3.2 Normalized LMS algorithm
12.3.3 RLS algorithm
12.3.3.1 Practical implementation of RLS
12.3.4 Comparing convergence rates
12.4 Statistical analysis
12.4.1 Data model
12.4.2 Relating the autocorrelation matrix of the weight error vector to the EMSE and the MSD
12.4.3 Statistical analysis of the LMS algorithm
12.4.4 A unified statistical analysis
12.4.4.1 A general update equation
12.4.4.2 Alternative analysis methods
12.4.4.3 Analysis with the traditional method
12.4.4.4 Steady-state analysis with the energy conservation method
12.4.4.5 Relation between the results obtained with both analysis methods
12.4.4.6 Optimal step size for tracking
12.5 Extensions and current research
12.5.1 Finite precision arithmetic
12.5.1.1 Finite precision effects in LMS
12.5.1.2 Finite precision effects in RLS
12.5.1.3 DCD-RLS
DCD minimization of quadratic functions
DCD-RLS
12.5.2 Regularization
12.5.3 Variable step size
12.5.4 Non-Gaussian noise and robust filters
12.5.5 Blind equalization
12.5.6 Subband and transform-domain adaptive filters
12.5.7 Affine projections algorithm
12.5.8 Cooperative estimation
12.5.8.1 Combinations of adaptive filters
12.5.8.2 Distributed adaptive filtering
12.5.9 Adaptive IIR filters
12.5.10 Set membership and projections onto convex sets
12.5.11 Adaptive filters with constraints
12.5.12 Reduced-rank adaptive filters
12.5.13 Kernel adaptive filtering
References
13 Machine learning
13.1 Introduction
13.2 Learning concepts
13.2.1 Data concepts
13.2.2 Kinds of learning
13.2.3 Underfitting and overfitting
13.3 Unsupervised learning
13.3.1 Introduction
13.3.2 K-Means algorithm
13.3.3 Self-organizing map
13.3.4 Mean-shift clustering
13.3.5 MNIST example
13.4 Supervised learning
13.4.1 Perceptron
13.4.2 Fully connected neural networks
13.4.2.1 Local error features
13.4.3 Regression
13.4.4 A regression example
13.4.5 Classification
13.4.6 A classification example
13.4.7 EMNIST letter dataset
13.4.8 Dropout
13.4.9 Number of flops
13.4.10 Convolutional neural networks
13.4.11 Recurrent neural networks
13.4.11.1 Standard RNN
13.4.11.2 Activation functions
13.4.11.3 Long short-term memory
13.4.11.4 Types of RNNs
13.4.11.5 Using RNN
13.4.11.6 An example with RNN using LSTM
13.4.12 Support vector machines
13.4.12.1 The optimization problem
13.4.12.2 Hard-margin SVM
13.4.12.3 Support vectors
13.4.12.4 Nonlinear transformations
13.4.12.5 Mercer kernels
13.4.12.6 Some kernel functions
13.5 Ensemble learning
13.5.1 Random forest
13.5.2 Boosting
13.6 Deep learning
13.6.1 Large dataset corpora
13.6.2 Nonlinear activation function
13.6.3 Network weight initialization
13.6.4 Loss function
13.6.5 Learning algorithm
13.6.6 Network regularization
13.6.7 Deep learning frameworks
13.6.8 Autoencoders
13.6.9 Adversarial training
13.6.9.1 White-box attack
13.6.9.2 Black-box attack
13.7 CNN visualization
13.7.1 Saliency map (vanilla and guided backpropagation)
13.8 Deep reinforcement learning
13.9 Current trends
13.9.1 Transfer learning
13.9.2 Learning on graphs
13.9.3 Interpretable artificial intelligence
13.9.4 Federated learning
13.9.5 Causal machine learning
13.10 Concluding remarks
13.11 Appendix: RNN's gradient derivations
13.11.1 Gradient derivations of the standard RNN
13.11.2 Gradient derivations for the LSTM scheme
References
14 A primer on graph signal processing
14.1 The case for GSP
14.2 Fundamentals of graph theory
14.3 Graph signals and systems
14.3.1 Graph signals
14.3.2 Inferring graphs
14.3.3 Graph shift operators
14.4 Graph Fourier transform
14.4.1 Adjacency-based graph Fourier transform
14.4.2 Laplacian-based graph Fourier transform
14.5 Graph filtering
14.5.1 Convolution and filtering
Convolution between graph signals
Impulse graph signal
Graph filters
14.5.2 Graph filter design
Least-squares approximation
Chebyshev polynomial approximation
Jackson–Chebyshev polynomial approximation
14.6 Down- and upsampling graph signals
14.6.1 Band-limited graph signals
Sampling and interpolation operators
Band-limitedness
Conditions for perfect reconstruction
14.6.2 Approximately band-limited GSs
14.6.3 Optimal sampling strategies
14.6.4 Interpolation of band-limited GSs
14.7 Examples and applications
14.7.1 On-line filtering of network signals
14.7.2 Image processing
14.7.3 3D point cloud processing
14.7.4 Spatiotemporal analysis and visualization
14.8 A short history of GSP
References
15 Tensor methods in deep learning
15.1 Introduction
15.2 Preliminaries on matrices and tensors
15.2.1 Notation
15.2.2 Transforming tensors into matrices and vectors
15.2.3 Matrix and tensor products
15.2.4 Tensor diagrams
15.3 Tensor decomposition
15.3.1 Canonical-polyadic decomposition
15.3.2 Tucker decomposition
15.3.3 Tensor-train
15.3.4 Tensor regression
15.4 Tensor methods in deep learning architectures
15.4.1 Parameterizing fully connected layers
15.4.2 Tensor contraction layers
15.4.3 Tensor regression layers
15.4.4 Parameterizing convolutional layers
15.4.4.1 1×1 convolutions
15.4.4.2 Kruskal convolutions
15.4.4.3 Tucker convolutions
15.4.4.4 Multistage compression
15.4.4.5 General N-dimensional separable convolutions and transduction
15.4.4.6 Parameterizing full networks
15.4.4.7 Parameterizing recurrent neural networks
15.4.4.8 Preventing degeneracy in factorized convolutions
15.4.5 Tensor structures in polynomial networks and attention mechanisms
15.4.6 Tensor structures in generative adversarial networks
15.4.6.1 Multilinear factorization of latent space
15.4.6.1.1 Style code factorization
15.4.6.1.2 Feature map factorization
15.4.7 Robustness of deep networks
15.4.7.1 Tensor dropout
15.4.7.2 Defensive tensorization
15.4.7.3 Tensor-Shield
15.4.8 Applications
15.5 Tensor methods in quantum machine learning
15.5.1 Quantum mechanics
15.5.2 Matrix product states for quantum simulation
15.5.3 Factorized quantum circuit simulation with TensorLy-Quantum
15.6 Software
15.7 Limitations and guidelines
15.7.1 Choosing the appropriate decomposition
15.7.2 Rank selection
15.7.3 Training pitfalls and numerical issues
References
16 Nonconvex graph learning: sparsity, heavy tails, and clustering
16.1 Introduction
16.1.1 Learning undirected graphs
16.1.2 Majorization-minimization: a brief visit
16.1.3 Alternating direction method of multipliers: a brief visit
16.2 Sparse graphs
16.3 Heavy-tail graphs
16.4 Clustering
16.4.1 Soft-clustering via bipartite graphs
16.5 Conclusion
Acknowledgments
References
17 Dictionaries in machine learning
17.1 Data-driven AI via dictionary learning for sparse signal processing
17.2 Sparsity and sparse representations
17.2.1 "Something rather than anything" – sparse structure in the natural world
17.2.2 Sparsity: "meaning" versus information as "choice of possibilities"
17.2.3 The linear sparse generative model
17.2.4 Sufficient conditions for a unique sparse solution
17.2.5 Sparsity classes
17.2.6 Bases, frames, dictionaries, and sparsity classes
17.3 Sparse signal processing
17.3.1 Measures of sparsity and diversity
17.3.2 Sparsity-inducing optimizations of diversity measures
17.3.3 The stochastic model and sparse Bayesian learning
17.4 Dictionary learning I – basic models and algorithms
17.4.1 Dictionary learning from observational data – the setup
17.4.2 Dictionary learning as a generalization of K-means clustering and vector quantization
17.4.3 Dictionary learning as a generalization of PCA and FA
17.4.4 Dictionary learning using the Method of Optimal Directions
17.4.5 Dictionary learning using K-SVD
17.5 Dictionary learning II – the hierarchical/empirical Bayes approach
17.5.1 Hierarchical model for sparse Bayesian learning – Type I versus Type II estimation
17.5.2 Dictionary learning in the SBL framework
17.5.2.1 Scalable SBL dictionary learning
17.5.3 Information theory, generative models, and "Sparseland"
17.5.4 Dictionary learning for conceptual understanding of the world
17.6 Nonnegative matrix factorization in dictionary learning
17.7 Dictionaries, data manifold learning, and geometric multiresolution analysis
17.8 Hard clustering and classification in dictionary learning
17.8.1 Clustering
17.8.2 Classification using residual errors
17.9 Multilayer dictionary learning and classification
17.10 Kernel dictionary learning
17.11 Conclusion
17.A Derivation and properties of the K-SVD algorithm
17.B Derivation of the SBL EM update equation
17.C SBL dictionary learning algorithm
17.D Mathematical background for kernelizing dictionary learning
References
Index
Back Cover