Neural Networks with Model Compression
By Baochang Zhang; Tiancheng Wang; Sheng Xu; David Doermann
- Publisher: Springer Nature Singapore
- Year: 2024
- Language: English
- Pages: 269
- Category: Library
Synopsis
Deep learning has achieved impressive results in computer vision tasks such as image classification, as well as in natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. Our book also introduces NAS, given its state-of-the-art performance in applications such as image classification and object detection. We further describe extensive applications of compressed deep models to image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have basic knowledge of machine learning and deep learning to follow the methods described in this book.
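As a quick illustration of the parameter quantization theme (a minimal sketch, not code from the book), the snippet below binarizes a weight tensor to one bit per weight with an XNOR-Net-style per-tensor scaling factor; the function name `binarize_weights` and the choice of the mean absolute value as the scale are illustrative assumptions.

```python
import numpy as np

def binarize_weights(w: np.ndarray):
    """Illustrative sketch: quantize weights to alpha * sign(w).

    alpha is the mean absolute value of w, the per-tensor scaling
    factor used in XNOR-Net-style binary networks to reduce the
    quantization error relative to plain sign(w).
    """
    alpha = float(np.mean(np.abs(w)))  # per-tensor scaling factor
    return alpha * np.sign(w), alpha   # each weight becomes +alpha or -alpha

# Toy usage: a 3x3 convolution kernel stored at 1 bit per weight
# (plus a single float for alpha) instead of 32 bits per weight.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)).astype(np.float32)
w_bin, alpha = binarize_weights(w)
print("scaling factor alpha:", alpha)
print("binarized kernel:\n", w_bin)
```

Storing sign bits plus one floating-point scale cuts weight memory by roughly 32x for 32-bit weights, the kind of saving that motivates the binary-network and quantization chapters listed in the table of contents below.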
Table of Contents
Cover
Front Matter
1. Introduction
2. Binary Neural Networks
3. Binary Neural Architecture Search
4. Quantization of Neural Networks
5. Network Pruning
6. Applications
Similar Volumes
Studies of the evolution of animal signals and sensory behaviour have more recently shifted from considering 'extrinsic' (environmental) determinants to 'intrinsic' (physiological) ones. The drive behind this change has been the increasing availability of neural network models. With contributions fr…
Research in neural networks has escalated dramatically in the last decade, acquiring along the way terms and concepts, such as learning, memory, perception, recognition, which are the basis of neuropsychology. Nevertheless, for many, neural modelling remains controversial in its purported ability to…