Deep Learning for Natural Language Processing
by Stephan Raaijmakers
- Publisher: Manning Publications
- Year: 2022
- Language: English
- Pages: 296
- Category: Library
Synopsis
Humans do a great job of reading text, identifying key ideas, summarizing, making connections, and other tasks that require comprehension and context. Recent advances in deep learning make it possible for computer systems to achieve similar results.
Deep Learning for Natural Language Processing teaches you to apply deep learning methods to natural language processing (NLP) to interpret and use text effectively. In this insightful book, NLP expert Stephan Raaijmakers distills his extensive knowledge of the latest state-of-the-art developments in this rapidly emerging field.
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
Table of Contents
Deep Learning for NLP
brief contents
contents
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A road map
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1 Introduction
1 Deep learning for NLP
1.1 A selection of machine learning methods for NLP
1.1.1 The perceptron
1.1.2 Support vector machines
1.1.3 Memory-based learning
1.2 Deep learning
1.3 Vector representations of language
1.3.1 Representational vectors
1.3.2 Operational vectors
1.4 Vector sanitization
1.4.1 The hashing trick
1.4.2 Vector normalization
Summary
2 Deep learning and language: The basics
2.1 Basic architectures of deep learning
2.1.1 Deep multilayer perceptrons
2.1.2 Two basic operators: Spatial and temporal
2.2 Deep learning and NLP: A new paradigm
Summary
3 Text embeddings
3.1 Embeddings
3.1.1 Embedding by direct computation: Representational embeddings
3.1.2 Learning to embed: Procedural embeddings
3.2 From words to vectors: Word2Vec
3.3 From documents to vectors: Doc2Vec
Summary
Part 2 Deep NLP
4 Textual similarity
4.1 The problem
4.2 The data
4.2.1 Authorship attribution and verification data
4.3 Data representation
4.3.1 Segmenting documents
4.3.2 Word-level information
4.3.3 Subword-level information
4.4 Models for measuring similarity
4.4.1 Authorship attribution
4.4.2 Verifying authorship
Summary
5 Sequential NLP
5.1 Memory and language
5.1.1 The problem: Question Answering
5.2 Data and data processing
5.3 Question Answering with sequential models
5.3.1 RNNs for Question Answering
5.3.2 LSTMs for Question Answering
5.3.3 End-to-end memory networks for Question Answering
Summary
6 Episodic memory for NLP
6.1 Memory networks for sequential NLP
6.2 Data and data processing
6.2.1 PP-attachment data
6.2.2 Dutch diminutive data
6.2.3 Spanish part-of-speech data
6.3 Strongly supervised memory networks: Experiments and results
6.3.1 PP-attachment
6.3.2 Dutch diminutives
6.3.3 Spanish part-of-speech tagging
6.4 Semi-supervised memory networks
6.4.1 Semi-supervised memory networks: Experiments and results
Summary
Part 3 Advanced topics
7 Attention
7.1 Neural attention
7.2 Data
7.3 Static attention: MLP
7.4 Temporal attention: LSTM
7.5 Experiments
7.5.1 MLP
7.5.2 LSTM
Summary
8 Multitask learning
8.1 Introduction to multitask learning
8.2 Multitask learning
8.3 Multitask learning for consumer reviews: Yelp and Amazon
8.3.1 Data handling
8.3.2 Hard parameter sharing
8.3.3 Soft parameter sharing
8.3.4 Mixed parameter sharing
8.4 Multitask learning for Reuters topic classification
8.4.1 Data handling
8.4.2 Hard parameter sharing
8.4.3 Soft parameter sharing
8.4.4 Mixed parameter sharing
8.5 Multitask learning for part-of-speech tagging and named-entity recognition
8.5.1 Data handling
8.5.2 Hard parameter sharing
8.5.3 Soft parameter sharing
8.5.4 Mixed parameter sharing
Summary
9 Transformers
9.1 BERT up close: Transformers
9.2 Transformer encoders
9.2.1 Positional encoding
9.3 Transformer decoders
9.4 BERT: Masked language modeling
9.4.1 Training BERT
9.4.2 Fine-tuning BERT
9.4.3 Beyond BERT
Summary
10 Applications of Transformers: Hands-on with BERT
10.1 Introduction: Working with BERT in practice
10.2 A BERT layer
10.3 Training BERT on your data
10.4 Fine-tuning BERT
10.5 Inspecting BERT
10.5.1 Homonyms in BERT
10.6 Applying BERT
Summary
bibliography
index