𝔖 Scriptorium
✦   LIBER   ✦

📁

Deep Neural Networks in a Mathematical Framework (SpringerBriefs in Computer Science)

✍ Scribed by Anthony L. Caterini, Dong Eui Chang


Publisher
Springer
Year
2018
Tongue
English
Leaves
95
Category
Library

⬇  Acquire This Volume

No coin nor oath required. For personal study only.

✦ Synopsis


This SpringerBrief describes how to build a rigorous end-to-end mathematical framework for deep neural networks. The authors provide tools to represent and describe neural networks, casting previous results in the field in a more natural light. In particular, the authors derive gradient descent algorithms in a unified way for several neural network structures, including multilayer perceptrons, convolutional neural networks, deep autoencoders, and recurrent neural networks. Furthermore, the authors' framework is both more concise and more mathematically intuitive than previous representations of neural networks.

This SpringerBrief is one step towards unlocking the black box of deep learning. The authors believe that this framework will help catalyze further discoveries regarding the mathematical properties of neural networks. This SpringerBrief is accessible not only to researchers, professionals, and students working and studying in the field of deep learning, but also to those outside of the neural network community.
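To give a flavor of the unified treatment the synopsis describes, here is a minimal illustrative sketch (not taken from the book) of the kind of computation it formalizes: forward propagation through a small two-layer perceptron with an elementwise tanh nonlinearity, backpropagation of a squared-error loss via layerwise adjoints, and a single gradient descent step. All names and numbers are hypothetical.

```python
import numpy as np

# Tiny two-layer perceptron: f(x) = W2 @ tanh(W1 @ x + b1) + b2,
# trained on a squared-error loss with one plain gradient descent step.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)          # input vector
y = rng.standard_normal(2)          # regression target

W1, b1 = rng.standard_normal((3, 4)), np.zeros(3)
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)

def forward(x):
    z1 = W1 @ x + b1                # pre-activation of layer 1
    a1 = np.tanh(z1)                # elementwise nonlinearity
    yhat = W2 @ a1 + b2             # linear output layer
    return a1, yhat

a1, yhat = forward(x)
loss0 = 0.5 * np.sum((yhat - y) ** 2)

# Backpropagation: apply the adjoint of each layer map to the loss gradient.
delta2 = yhat - y                           # dL/dyhat
gW2, gb2 = np.outer(delta2, a1), delta2
delta1 = (W2.T @ delta2) * (1 - a1 ** 2)    # tanh' = 1 - tanh^2, Hadamard product
gW1, gb1 = np.outer(delta1, x), delta1

# One gradient descent step with a small learning rate.
lr = 0.01
W1 -= lr * gW1; b1 -= lr * gb1
W2 -= lr * gW2; b2 -= lr * gb2

_, yhat_new = forward(x)
loss1 = 0.5 * np.sum((yhat_new - y) ** 2)
print(loss1 < loss0)  # the small step decreases the loss
```

The elementwise derivative appearing as a Hadamard product, and the transposed weight matrix acting as an adjoint map, are exactly the ingredients the book's generic representation makes precise.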

✦ Table of Contents


Preface
Contents
Acronyms
1 Introduction and Motivation
1.1 Introduction to Neural Networks
1.1.1 Brief History
1.1.2 Tasks Where Neural Networks Succeed
1.2 Theoretical Contributions to Neural Networks
1.2.1 Universal Approximation Properties
1.2.2 Vanishing and Exploding Gradients
1.2.3 Wasserstein GAN
1.3 Mathematical Representations
1.4 Book Layout
References
2 Mathematical Preliminaries
2.1 Linear Maps, Bilinear Maps, and Adjoints
2.2 Derivatives
2.2.1 First Derivatives
2.2.2 Second Derivatives
2.3 Parameter-Dependent Maps
2.3.1 First Derivatives
2.3.2 Higher-Order Derivatives
2.4 Elementwise Functions
2.4.1 Hadamard Product
2.4.2 Derivatives of Elementwise Functions
2.4.3 The Softmax and Elementwise Log Functions
2.5 Conclusion
References
3 Generic Representation of Neural Networks
3.1 Neural Network Formulation
3.2 Loss Functions and Gradient Descent
3.2.1 Regression
3.2.2 Classification
3.2.3 Backpropagation
3.2.4 Gradient Descent Step Algorithm
3.3 Higher-Order Loss Function
3.3.1 Gradient Descent Step Algorithm
3.4 Conclusion
References
4 Specific Network Descriptions
4.1 Multilayer Perceptron
4.1.1 Formulation
4.1.2 Single-Layer Derivatives
4.1.3 Loss Functions and Gradient Descent
4.2 Convolutional Neural Networks
4.2.1 Single Layer Formulation
Cropping and Embedding Operators
Convolution Operator
Max-Pooling Operator
The Layerwise Function
4.2.2 Multiple Layers
4.2.3 Single-Layer Derivatives
4.2.4 Gradient Descent Step Algorithm
4.3 Deep Auto-Encoder
4.3.1 Weight Sharing
4.3.2 Single-Layer Formulation
4.3.3 Single-Layer Derivatives
4.3.4 Loss Functions and Gradient Descent
4.4 Conclusion
References
5 Recurrent Neural Networks
5.1 Generic RNN Formulation
5.1.1 Sequence Data
5.1.2 Hidden States, Parameters, and Forward Propagation
5.1.3 Prediction and Loss Functions
5.1.4 Loss Function Gradients
Prediction Parameters
Real-Time Recurrent Learning
Backpropagation Through Time
5.2 Vanilla RNNs
5.2.1 Formulation
5.2.2 Single-Layer Derivatives
5.2.3 Backpropagation Through Time
5.2.4 Real-Time Recurrent Learning
Evolution Equation
Loss Function Derivatives
Gradient Descent Step Algorithm
5.3 RNN Variants
5.3.1 Gated RNNs
5.3.2 Bidirectional RNNs
5.3.3 Deep RNNs
5.4 Conclusion
References
6 Conclusion and Future Work
References
Glossary


📜 SIMILAR VOLUMES


Deep Neural Networks in a Mathematical F
✍ Anthony L. Caterini; Dong Eui Chang 📂 Library 📅 2018 🏛 Springer 🌐 English

This SpringerBrief describes how to build a rigorous end-to-end mathematical framework for deep neural networks. The authors provide tools to represent and describe neural networks, casting previous results in the field in a more natural light. In particular, the authors derive gradient descent algo

Neural Networks in a Softcomputing Frame
✍ K. -L. Du PhD, M. N. S. Swamy PhD, D.Sc (Eng) (auth.) 📂 Library 📅 2006 🏛 Springer-Verlag London 🌐 English

Conventional model-based data processing methods are computationally expensive and require experts' knowledge for the modelling of a system; neural networks provide a model-free, adaptive, parallel-processing solution. Neural Networks in a Softcomputing Framework presents a thorough r

Neural Networks in a Softcomputing Frame
✍ Ke-Lin Du, M.N.S. Swamy 📂 Library 📅 2006 🏛 Springer 🌐 English

Conventional model-based data processing methods are computationally expensive and require experts' knowledge for the modelling of a system; neural networks provide a model-free, adaptive, parallel-processing solution. Neural Networks in a Softcomputing Framework presents a thorough review of the mos

A Primer on Generative Adversarial Netwo
✍ Sanaa Kaddoura 📂 Library 🏛 Springer 🌐 English

This book is meant for readers who want to understand GANs without the need for a strong mathematical background. Moreover, it covers the practical applications of GANs, making it an excellent resource for beginners. A Primer on Generative Adversarial Networks is suit