𝔖 Scriptorium
✦ LIBER ✦


Deep Learning and Scientific Computing with R torch (Chapman & Hall/CRC The R Series)

✍ Scribed by Sigrid Keydana


Publisher: Chapman and Hall/CRC
Year: 2023
Tongue: English
Leaves: 414
Edition: 1
Category: Library

⬇ Acquire This Volume

No coin nor oath required. For personal study only.

✦ Synopsis


torch is an R port of PyTorch, one of the two most widely used deep learning frameworks in industry and research. It is also an excellent tool for scientific computation. It is written entirely in R and C/C++.

Though still "young" as a project, R torch already has a vibrant community of users and developers. Experience shows that torch users come from a broad range of backgrounds. This book aims to be useful to (almost) everyone. Broadly speaking, its purposes are threefold:

- Provide a thorough introduction to torch basics – both by carefully explaining underlying concepts and ideas, and showing enough examples for the reader to become "fluent" in torch.

- Again with a focus on conceptual explanation, show how to use torch in deep-learning applications, ranging from image recognition and time series prediction to audio classification.

- Provide a concepts-first, reader-friendly introduction to selected scientific-computation topics (namely, matrix computations, the Discrete Fourier Transform, and wavelets), all accompanied by torch code you can play with.

Deep Learning and Scientific Computing with R torch is written with first-hand technical expertise and in an engaging, fun-to-read way.
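
To give a flavor of the torch basics covered in Part I, here is a minimal, illustrative sketch of tensors and autograd. It is not code from the volume itself; it assumes only that the torch package is installed from CRAN.

library(torch)

# Create a tensor and ask torch to record operations on it.
x <- torch_tensor(c(1, 2, 3), requires_grad = TRUE)

# A scalar-valued function of x: y = sum(x^2).
y <- (x^2)$sum()

# Backpropagation fills x$grad with dy/dx = 2 * x, i.e. 2, 4, 6.
y$backward()
x$grad

Everything the book does with neural networks builds on these two ingredients: tensors that hold data, and autograd that tracks how outputs depend on them.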

✦ Table of Contents


Cover
Half Title
Series Page
Title Page
Copyright Page
Contents
List of Figures
Preface
Author Biography
I. Getting Familiar with Torch
1. Overview
2. On torch, and How to Get It
2.1. In torch World
2.2. Installing and Running torch
3. Tensors
3.1. What's in a Tensor?
3.2. Creating Tensors
3.2.1. Tensors from values
3.2.2. Tensors from specifications
3.2.3. Tensors from datasets
3.3. Operations on Tensors
3.3.1. Summary operations
3.4. Accessing Parts of a Tensor
3.4.1. "Think R"
3.5. Reshaping Tensors
3.5.1. Zero-copy reshaping vs. reshaping with copy
3.6. Broadcasting
3.6.1. Broadcasting rules
4. Autograd
4.1. Why Compute Derivatives?
4.2. Automatic Differentiation Example
4.3. Automatic Differentiation with torch autograd
5. Function Minimization with autograd
5.1. An Optimization Classic
5.2. Minimization from Scratch
6. A Neural Network from Scratch
6.1. Idea
6.2. Layers
6.3. Activation Functions
6.4. Loss Functions
6.5. Implementation
6.5.1. Generate random data
6.5.2. Build the network
6.5.3. Train the network
7. Modules
7.1. Built-in nn_module()s
7.2. Building up a Model
7.2.1. Models as sequences of layers: nn_sequential()
7.2.2. Models with custom logic
8. Optimizers
8.1. Why Optimizers?
8.2. Using built-in torch Optimizers
8.3. Parameter Update Strategies
8.3.1. Gradient descent (a.k.a. steepest descent, a.k.a. stochastic gradient descent (SGD))
8.3.2. Things that matter
8.3.3. Staying on track: Gradient descent with momentum
8.3.4. Adagrad
8.3.5. RMSProp
8.3.6. Adam
9. Loss Functions
9.1. torch Loss Functions
9.2. What Loss Function Should I Choose?
9.2.1. Maximum likelihood
9.2.2. Regression
9.2.3. Classification
10. Function Minimization with L-BFGS
10.1. Meet L-BFGS
10.1.1. Changing slopes
10.1.2. Exact Newton method
10.1.3. Approximate Newton: BFGS and L-BFGS
10.1.4. Line search
10.2. Minimizing the Rosenbrock Function with optim_lbfgs()
10.2.1. optim_lbfgs() default behavior
10.2.2. optim_lbfgs() with line search
11. Modularizing the Neural Network
11.1. Data
11.2. Network
11.3. Training
11.4. What's to Come
II. Deep Learning with torch
12. Overview
13. Loading Data
13.1. Data vs. dataset() vs. dataloader() – What's the Difference?
13.2. Using dataset()s
13.2.1. A self-built dataset()
13.2.2. tensor_dataset()
13.2.3. torchvision::mnist_dataset()
13.3. Using dataloader()s
14. Training with luz
14.1. Que haya luz – Que haja luz – Let there be Light
14.2. Porting the Toy Example
14.2.1. Data
14.2.2. Model
14.2.3. Training
14.3. A More Realistic Scenario
14.3.1. Integrating training, validation, and test
14.3.2. Using callbacks to "hook" into the training process
14.3.3. How luz helps with devices
14.4. Appendix: A Train-Validate-Test Workflow Implemented by Hand
15. A First Go at Image Classification
15.1. What Does It Take to Classify an Image?
15.2. Neural Networks for Feature Detection and Feature Emergence
15.2.1. Detecting low-level features with cross-correlation
15.2.2. Build up feature hierarchies
15.3. Classification on Tiny Imagenet
15.3.1. Data pre-processing
15.3.2. Image classification from scratch
16. Making Models Generalize
16.1. The Royal Road: More – and More Representative! – Data
16.2. Pre-processing Stage: Data Augmentation
16.2.1. Classic data augmentation
16.2.2. Mixup
16.3. Modeling Stage: Dropout and Regularization
16.3.1. Dropout
16.3.2. Regularization
16.4. Training Stage: Early Stopping
17. Speeding up Training
17.1. Batch Normalization
17.2. Dynamic Learning Rates
17.2.1. Learning rate finder
17.2.2. Learning rate schedulers
17.3. Transfer Learning
18. Image Classification, Take Two: Improving Performance
18.1. Data Input (Common for all)
18.2. Run 1: Dropout
18.3. Run 2: Batch Normalization
18.4. Run 3: Transfer Learning
19. Image Segmentation
19.1. Segmentation vs. Classification
19.2. U-Net, a "classic" in image segmentation
19.3. U-Net – a torch implementation
19.3.1. Encoder
19.3.2. Decoder
19.3.3. The "U"
19.3.4. Top-level module
19.4. Dogs and Cats
20. Tabular Data
20.1. Types of Numerical Data, by Example
20.2. A torch dataset for Tabular Data
20.3. Embeddings in Deep Learning: The Idea
20.4. Embeddings in Deep Learning: Implementation
20.5. Model and Model Training
20.6. Embedding-generated Representations by Example
21. Time Series
21.1. Deep Learning for Sequences: The Idea
21.2. A Basic Recurrent Neural Network
21.2.1. Basic rnn_cell()
21.2.2. Basic rnn_module()
21.3. Recurrent Neural Networks in torch
21.4. RNNs in Practice: GRU and LSTM
21.5. Forecasting Electricity Demand
21.5.1. Data inspection
21.5.2. Forecasting the very next value
21.5.3. Forecasting multiple time steps ahead
22. Audio Classification
22.1. Classifying Speech Data
22.2. Two Equivalent Representations
22.3. Combining Representations: The Spectrogram
22.4. Training a Model for Audio Classification
22.4.1. Baseline setup: Training a convnet on spectrograms
22.4.2. Variation one: Use a Mel-scale spectrogram instead
22.4.3. Variation two: Complex-valued spectrograms
III. Other Things to do with torch: Matrices, Fourier Transform, and Wavelets
23. Overview
24. Matrix Computations: Least-squares Problems
24.1. Five Ways to do Least Squares
24.2. Regression for Weather Prediction
24.2.1. Least squares (I): Setting expectations with lm()
24.2.2. Least squares (II): Using linalg_lstsq()
24.2.3. Interlude: What if we hadn't standardized the data?
24.2.4. Least squares (III): The normal equations
24.2.5. Least squares (IV): Cholesky decomposition
24.2.6. Least squares (V): LU factorization
24.2.7. Least squares (VI): QR factorization
24.2.8. Least squares (VII): Singular Value Decomposition (SVD)
24.2.9. Checking execution times
24.3. A Quick Look at Stability
25. Matrix Computations: Convolution
25.1. Why Convolution?
25.2. Convolution in One Dimension
25.2.1. Two ways to think about convolution
25.2.2. Implementation
25.3. Convolution in Two Dimensions
25.3.1. How it works (output view)
25.3.2. Implementation
26. Exploring the Discrete Fourier Transform (DFT)
26.1. Understanding the Output of torch_fft_fft()
26.1.1. Starting point: A cosine of frequency 1
26.1.2. Reconstructing the magic
26.1.3. Varying frequency
26.1.4. Varying amplitude
26.1.5. Adding phase
26.1.6. Superposition of sinusoids
26.2. Coding the DFT
26.3. Fun with sox
27. The Fast Fourier Transform (FFT)
27.1. Some Terminology
27.2. Radix-2 decimation-in-time (DIT) walkthrough
27.2.1. The main idea: Recursive split
27.2.2. One further simplification
27.3. FFT as Matrix Factorization
27.4. Implementing the FFT
27.4.1. DFT, the β€œloopy” way
27.4.2. DFT, vectorized
27.4.3. Radix-2 decimation in time FFT, recursive
27.4.4. Radix-2 decimation in time FFT by matrix factorization
27.4.5. Radix-2 decimation in time FFT, optimized for vectorization
27.4.6. Checking against torch_fft_fft()
27.4.7. Comparing performance
27.4.8. Making use of Just-in-Time (JIT) compilation
28. Wavelets
28.1. Introducing the Morlet Wavelet
28.2. The roles of K and ω_a
28.3. Wavelet Transform: A Straightforward Implementation
28.4. Resolution in Time versus in Frequency
28.5. How is this Different from a Spectrogram?
28.6. Performing the Wavelet Transform in the Fourier Domain
28.7. Creating the Wavelet Diagram
28.8. A Real-world Example: Chaffinch’s Song
References
Index
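
As a small taste of Part III, here is a short, self-contained sketch in the spirit of Section 26.1.1 ("Starting point: A cosine of frequency 1"), showing what torch_fft_fft() returns for a pure cosine. It is illustrative code, not the book's own, and again assumes only an installed torch package.

library(torch)

n <- 8
# Sample one full cosine cycle (frequency 1) at n equally spaced points.
x <- torch_cos(torch_tensor(2 * pi * (0:(n - 1)) / n))

ft <- torch_fft_fft(x)

# Up to floating-point noise, all magnitudes are zero except in
# bins 2 and 8 (frequencies +1 and -1), where each equals n / 2 = 4.
torch_abs(ft)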


📜 SIMILAR VOLUMES



Behavior Analysis with Machine Learning
✍ Enrique Garcia Ceja 📂 Library 📅 2021 🏛 Chapman and Hall/CRC 🌐 English

Behavior Analysis with Machine Learning Using R introduces machine learning and deep learning concepts and algorithms applied to a diverse set of behavior analysis problems. It focuses on the practical aspects of solving such problems based on data collected from sensors or stored in elect

Learn R: As a Language (Chapman & Hall/C
✍ Pedro J. Aphalo 📂 Library 📅 2020 🏛 Chapman and Hall/CRC 🌐 English

Learning a computer language like R can be either frustrating, fun, or boring. Having fun requires challenges that wake up the learner's curiosity but also provide an emotional reward on overcoming them. This book is designed so that it includes smaller and bigger challenges, in what I call playg

Event History Analysis with R (Chapman &
✍ Göran Broström 📂 Library 📅 2021 🏛 Chapman and Hall/CRC 🌐 English

With an emphasis on social science applications, Event History Analysis with R, Second Edition, presents an introduction to survival and event history analysis using real-life examples. Since publication of the first edition, focus in the field has gradually shifted towards the analysis of large

Analyzing Baseball Data with R (Chapman
✍ Jim Albert, Benjamin S. Baumer, Max Marchi 📂 Library 📅 2024 🏛 Chapman and Hall/CRC 🌐 English

"Our community has continued to grow exponentially, thanks to those who inspire the next generation. And inspiring the next generation is what the authors of Analyzing Baseball Data with R are doing. They are setting the career path for still thousands more. We all need some sort of kicksta

Advanced R Solutions (Chapman & Hall/CRC
✍ Malte Grosser, Henning Bumann, Hadley Wickham 📂 Library 📅 2021 🏛 Chapman and Hall/CRC 🌐 English

This book offers solutions to all 284 exercises in Advanced R, Second Edition. All the solutions have been carefully documented and made to be as clear and accessible as possible. Working through the exercises and their solutions will give you a deeper understanding of a variety of