
Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13–17, 2021, Proceedings, Part I (Lecture Notes in Computer Science, 13028)

✍ Scribed by Lu Wang (editor), Yansong Feng (editor), Yu Hong (editor), Ruifang He (editor)


Publisher: Springer
Year: 2021
Tongue: English
Leaves: 861
Edition: 1st ed. 2021
Category: Library


✦ Synopsis


This two-volume set of LNAI 13028 and LNAI 13029 constitutes the refereed proceedings of the 10th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2021, held in Qingdao, China, in October 2021.

The 66 full papers, 23 poster papers, and 27 workshop papers presented were carefully reviewed and selected from 446 submissions. They are organized in the following areas: Fundamentals of NLP; Machine Translation and Multilinguality; Machine Learning for NLP; Information Extraction and Knowledge Graph; Summarization and Generation; Question Answering; Dialogue Systems; Social Media and Sentiment Analysis; NLP Applications and Text Mining; and Multimodality and Explainability.

✦ Table of Contents


Preface
Organization
Contents – Part I
Contents – Part II
Oral - Fundamentals of NLP
Coreference Resolution: Are the Eliminated Spans Totally Worthless?
1 Introduction
2 Background
3 Coreference Resolution with Enhanced Mention Representation
3.1 Mention Detection
3.2 Coreference Resolving with Global Spans Perceived
4 Model Training
5 Experimentation
5.1 Experimental Settings
5.2 Experimental Results
5.3 Analysis on Context-Aware Word Representations
5.4 Case Study
6 Related Work
7 Conclusion
References
Chinese Macro Discourse Parsing on Dependency Graph Convolutional Network
1 Introduction
2 Related Work
3 Basic Model: MDParser-TS
4 Chinese Macro Discourse Parsing on Dependency Graph Convolutional Network
4.1 Internal Topic Graph Construction
4.2 Interactive Topic Graph Construction
4.3 Dependency Graph Convolutional Network
4.4 Classifier
5 Experimentation
5.1 Dataset and Experimental Settings
5.2 Baselines
5.3 Experimental Results
6 Analysis
6.1 Analysis on Internal Topic Graph
6.2 Analysis on Interactive Topic Graph
6.3 Experimentation on English RST-DT
7 Conclusion
References
Predicting Categorial Sememe for English-Chinese Word Pairs via Representations in Explainable Sememe Space
1 Introduction
2 Task Formalization
3 Methodology
3.1 Word Vector Space O and Sememe Space Os
3.2 HowNet in Sememe Space Os
3.3 Target Data in Sememe Space Os
3.4 Training and Prediction
4 Experiment
4.1 Datasets
4.2 Experiment Settings
4.3 Overall Results
4.4 Results on Different POS Tags
4.5 Results on Different Ambiguity Degrees
4.6 Effect of Descending Factor c
4.7 Effect of Training Set Ratio
4.8 Categorial Sememe Knowledge Base
5 Related Work
6 Conclusion and Future Work
References
Multi-level Cohesion Information Modeling for Better Written and Dialogue Discourse Parsing
1 Introduction
2 Related Work
3 Baseline Model
3.1 Attention-Based EDU Encoder
3.2 Top-Down Baseline Model
3.3 Bottom-Up Baseline Model
3.4 Deep Sequential Baseline Model
4 Cohesion Modeling
4.1 Auto Cohesion Information Extraction
4.2 Graph Construction
4.3 Cohesion Modelling
4.4 Fusion Layer
5 Experiments
5.1 Datasets
5.2 Metric
5.3 Experimental Result
6 Conclusion
References
ProPC: A Dataset for In-Domain and Cross-Domain Proposition Classification Tasks
1 Introduction
2 Dataset Construction
2.1 Proposition Definition
2.2 Data Acquisition
2.3 Data Annotation
2.4 Dataset Analysis
3 Experiments
3.1 Baseline Methods
3.2 Experimental Setup
3.3 Results and Analysis
4 Related Work
5 Conclusion
References
CTRD: A Chinese Theme-Rheme Discourse Dataset
1 Introduction
2 Related Work
3 Theory Basis
3.1 The Theme-Rheme Theory
3.2 The Thematic Progression Patterns
4 Annotation Scheme
4.1 Theme-Rheme Annotation Criteria
4.2 Thematic Progression Annotation Criteria
5 Statistics
6 Experiments and Analysis
6.1 Theme-Rheme Automatic Recognition
6.2 Function Types Automatic Recognition
7 Conclusion
References
Machine Translation and Multilinguality
Learning to Select Relevant Knowledge for Neural Machine Translation
1 Introduction
2 Our Approach
2.1 Problem Definition
2.2 Retrieval Stage
2.3 Machine Translation via Selective Context
2.4 Multi-task Learning Framework
3 Evaluation and Datasets
3.1 Evaluation
3.2 Datasets
3.3 Training Details
3.4 Baselines
3.5 Results
4 Analysis
5 Related Work
6 Conclusion
References
Contrastive Learning for Machine Translation Quality Estimation
1 Introduction
2 Related Work
2.1 Machine Translation Quality Estimation
2.2 Contrastive Learning
3 Our Method
3.1 Denoising Reconstructed Samples
3.2 Contrastive Training
4 Experiments
4.1 Setup
4.2 Results and Analysis
4.3 Different Methods to Create Negative Samples
4.4 Compare with Metric-Based Method
5 Conclusion
References
Sentence-State LSTMs For Sequence-to-Sequence Learning
1 Introduction
2 Approach
2.1 Sentence-State LSTM Encoder
2.2 Comparison with RNNs, CNNs and Transformer
2.3 LSTM Decoder
2.4 Training
3 Experiments
3.1 Main Results
4 Analysis
4.1 Ablation Study
4.2 Effect of Recurrent Steps
5 Related Work
5.1 Seq2seq Modeling
5.2 Efficient Sequence Encoding
6 Conclusion
References
Guwen-UNILM: Machine Translation Between Ancient and Modern Chinese Based on Pre-Trained Models
1 Introduction
2 Related Work
3 The Guwen-UNILM Framework
3.1 Pre-training Step
3.2 Fine-Tuning Step
4 Experiment
4.1 Datasets
4.2 Experimental Setup
4.3 Comparative Models
4.4 Evaluation Metrics
4.5 Results and Discussion
5 Conclusion
References
Adaptive Transformer for Multilingual Neural Machine Translation
1 Introduction
2 Related Work
3 Background
4 Proposed Method
4.1 Adaptive Transformer
4.2 Adaptive Attention Layer
4.3 Adaptive Feed-Forward Layer
5 Experiments
5.1 Dataset
5.2 Model Configurations
5.3 Main Results
5.4 Ablation Study
5.5 Analysis on Shared Rate
5.6 Analysis on Low-Resource Language
6 Conclusion and Future Work
References
Improving Non-autoregressive Machine Translation with Soft-Masking
1 Introduction
2 Background
2.1 Autoregressive Machine Translation
2.2 Non-autoregressive Machine Translation
3 Method
3.1 Encoder
3.2 Decoder
3.3 Discriminator
3.4 Glancing Training
4 Experiments
4.1 Experiment Settings
4.2 Main Results
4.3 Decoding Speed
5 More Analysis
6 Related Works
7 Conclusion
References
Machine Learning for NLP
AutoNLU: Architecture Search for Sentence and Cross-sentence Attention Modeling with Re-designed Search Space
1 Introduction
2 Search Space Design
2.1 Meta-architectures
2.2 Encoder Operations
2.3 Aggregator Search Space
2.4 Design Choices
3 Architecture Search
3.1 Search Algorithm
3.2 Child Model Training
3.3 Improving Weight Sharing
3.4 Search Warm-Up
4 Experiments and Discussion
4.1 Datasets
4.2 Architecture Search Protocols
4.3 Results
4.4 Ablation on Our Strategies
4.5 Ablation on Our Search Space
5 Conclusion and Future Work
References
AutoTrans: Automating Transformer Design via Reinforced Architecture Search
1 Introduction
2 Related Work
3 Search Space Design
4 Architecture Search
4.1 Search Algorithm
4.2 Deriving Architectures
4.3 Cross-operation Parameter Sharing
4.4 Cross-layer Parameter Sharing
5 Experiments and Results
5.1 Datasets
5.2 Architecture Search Protocols
5.3 Main Results
5.4 Effects of Proportions of Training Data
5.5 Effects of Different Learning Rates on the Learned Architecture
5.6 Effects of Learning Rate on Search
6 Conclusions and Discussions
References
A Word-Level Method for Generating Adversarial Examples Using Whole-Sentence Information
1 Introduction
2 Related Work
3 Methodology
3.1 Selecting Candidate Substitutes
3.2 Searching for Adversarial Examples
4 Experiments
4.1 Setup
4.2 Results
5 Analysis and Discussions
5.1 Ablation Analyses
5.2 Effect of Beam Size
5.3 Adversarial Training
6 Conclusion
References
RAST: A Reward Augmented Model for Fine-Grained Sentiment Transfer
1 Introduction
2 Methodology
2.1 Overview
2.2 Encoder-Decoder Based Sentiment Transfer Model
2.3 Comparative Discriminator
2.4 Reward Augmented Training of Sentiment Transfer Model
3 Experiments
3.1 Experiment Settings
3.2 Evaluation Metrics
3.3 Results and Analysis
3.4 Ablation Study
3.5 Case Study
4 Related Work
5 Conclusion
References
Pre-trained Language Models for Tagalog with Multi-source Data
1 Introduction
2 Related Previous Research
2.1 Natural Language Processing for Tagalog
2.2 Pre-trained Language Model for Tagalog
3 Model
3.1 BERT
3.2 RoBERTa
3.3 ELECTRA
4 Pre-training Corpus
4.1 Oscar
4.2 Wiki
4.3 News
5 Experiment
5.1 Downstream Tasks
5.2 Pre-training
5.3 Fine-Tuning
5.4 Experiment Results and Analysis
6 Conclusion
References
Accelerating Pretrained Language Model Inference Using Weighted Ensemble Self-distillation
1 Introduction
2 Weighted Ensemble Self-distillation
2.1 Early Exiting
2.2 Weighted Ensemble Self-distillation
2.3 Adaptive Inference
3 Experiments
3.1 Datasets and Evaluation Metrics
3.2 Baselines
3.3 Implementation Details
3.4 Comparative Results
3.5 Ablation Experiments
3.6 The Effect of Weighted Ensemble Self-distillation
4 Conclusions
References
Information Extraction and Knowledge Graph
Employing Sentence Compression to Improve Event Coreference Resolution
1 Introduction
2 Related Work
3 Event Coreference Resolution on Sentence Compression
3.1 Event Extraction
3.2 Event Sentence Compression
3.3 Event Coreference Resolution
4 Experimentation
4.1 Experimental Settings
4.2 Results on Event Extraction
4.3 Results on Event Coreference Resolution
5 Analysis
5.1 Ablation Study
5.2 Analysis on Different Compression Strategies
5.3 Case Study
6 Conclusions
References
BRCEA: Bootstrapping Relation-Aware Cross-Lingual Entity Alignment
1 Introduction
2 Related Work
2.1 KG Embedding
2.2 Graph Convolutional Network
2.3 Cross-Lingual Entity Alignment
3 Our Approach: BRCEA
3.1 Problem Formulation
3.2 Overview
3.3 Training Data Construction
3.4 Attribute Embedding
3.5 Bootstrapping
3.6 Alignment Prediction
3.7 Training
4 Experiments
4.1 Datasets
4.2 Experiment Settings
5 Results
5.1 Main Results
5.2 Ablation Studies
5.3 Sensitivity to Proportion of Prior Alignments
6 Conclusion and Future Work
References
Employing Multi-granularity Features to Extract Entity Relation in Dialogue
1 Introduction
2 Related Work
3 Methodology
3.1 Problem Definition
3.2 Architecture
3.3 BERT Encoder
3.4 Coarse-Grained Feature Extractor
3.5 Fine-Grained Feature Extractor
3.6 Classifier
4 Experimentation
4.1 Experimental Settings
4.2 Experimental Results
4.3 Ablation Study
5 Conclusion
References
Attention Based Reinforcement Learning with Reward Shaping for Knowledge Graph Reasoning
1 Introduction
2 Related Work
3 Approach
3.1 Task Definition
3.2 Reinforcement Learning Formulation
3.3 Attention-Based Policy Network
3.4 Reward Shaping
3.5 Training
4 Experiments
4.1 Experiment Setup
4.2 Results and Analysis
4.3 Ablation Study
4.4 Case Study
5 Conclusion
References
Entity-Aware Relation Representation Learning for Open Relation Extraction
1 Introduction
2 Related Work
3 Problem Definition
4 Method
4.1 Entity-Aware Relation Representation Learning
4.2 Relation Clustering
5 Experiments
5.1 Datasets
5.2 Datasets Division
5.3 Baseline and Model
5.4 Evaluation Metrics
5.5 Implementation Details
5.6 Results and Analysis
6 Conclusions
References
ReMERT: Relational Memory-Based Extraction for Relational Triples
1 Introduction
2 Related Work
3 Methodology
3.1 BERT Encoder
3.2 ReMERT Decoder
4 Experiments
4.1 Experimental Settings
4.2 Datasets and Metrics
4.3 Baselines
4.4 Results and Analysis
5 Conclusion
References
Recognition of Nested Entity with Dependency Information
1 Introduction
2 Motivation
3 Model
3.1 Word Representation
3.2 Encoder
3.3 Decoder
3.4 Loss Function
4 Experiment
4.1 Experimental Settings
4.2 Performance of Decoder
4.3 Discussion
5 Related Work
6 Conclusion
References
HAIN: Hierarchical Aggregation and Inference Network for Document-Level Relation Extraction
1 Introduction
2 Methodology
2.1 Model Overview
2.2 Context Encoder
2.3 Meta Dependency Graph
2.4 Mention Interaction Graph
2.5 Entity Inference Graph
2.6 Relation Classification
3 Experiments
3.1 Dataset
3.2 Baseline Models
3.3 Experimental Setup
3.4 Main Results
3.5 Detail Analysis
4 Related Work
5 Conclusion
References
Incorporate Lexicon into Self-training: A Distantly Supervised Chinese Medical NER
1 Introduction
2 Related Work
2.1 Distantly Supervised NER
2.2 Self-training
3 Our Method
3.1 Distantly Supervised Method Annotating Data
3.2 High Recall Self-training
3.3 Fine-Grained Lexicon Enhanced Scoring and Ranking
3.4 Iteration Process
4 Experiments
4.1 Dataset
4.2 Evaluation
4.3 Experiment Setting
4.4 Main Results
4.5 Ablation Study
5 Conclusions
References
Summarization and Generation
Diversified Paraphrase Generation with Commonsense Knowledge Graph
1 Introduction
2 Related Works
2.1 Neural Paraphrase Generation
2.2 Knowledge-Enhanced Generation
3 Our Approach
3.1 Knowledge Retrieval
3.2 Paraphrases and Latent Concept Space Encoding
3.3 Diversified Generation
4 Experiments
4.1 Dataset
4.2 Experimental Setup
4.3 Results
4.4 Ablation Study
4.5 Case Study
5 Conclusion
References
Explore Coarse-Grained Structures for Syntactically Controllable Paraphrase Generation
1 Introduction
2 Related Work
3 Our Approach
3.1 Problem Formalization
3.2 Overall Architecture
3.3 Sentence Encoder
3.4 Syntactic Encoder
3.5 Paraphrase Decoder
3.6 The Overall Objective Function
4 Experiments
4.1 Dataset
4.2 Experiment Setup
4.3 Automated Evaluation
4.4 Human Evaluation
4.5 Results
4.6 Model Analysis
5 Conclusion
References
Chinese Poetry Generation with Metrical Constraints
1 Introduction
2 Related Work
3 Methodology
3.1 Problem Formulation
3.2 The Dual-Encoder Model
4 Experiments
4.1 Setup
4.2 Comparison Methods
4.3 Evaluation Metrics
5 Results and Discussions
5.1 Automatic Evaluation
5.2 Human Evaluation
5.3 Case Study
5.4 Limitations
6 Conclusions
References
CNewSum: A Large-Scale Summarization Dataset with Human-Annotated Adequacy and Deducibility Level
1 Introduction
2 Related Work
3 The CNewSum Dataset
3.1 Data Collection
3.2 Adequacy and Deducibility Annotation
3.3 Dataset Analysis
4 Experiment
4.1 Models
4.2 Results
4.3 Case Study
5 Conclusion
References
Question Generation from Code Snippets and Programming Error Messages
1 Introduction
2 Dataset and Experiment Setup
3 Method
3.1 Models
3.2 Pre-training
3.3 Fine-Tuning
3.4 Model Settings
3.5 Automatic Metric
4 Experiments
4.1 Baselines
4.2 Main Result
4.3 Influence of Bi-modal Inputs
5 Related Works
6 Conclusion
References
Extractive Summarization of Chinese Judgment Documents via Sentence Embedding and Memory Network
1 Introduction
2 Related Work
3 Proposed Model
3.1 BERT-Based Encoder
3.2 Sentence Embedding Layer
3.3 Memory Network
3.4 Classification and Extraction
4 Experiments
4.1 Settings
4.2 Comparison of Results
5 Conclusion
References
Question Answering
ThinkTwice: A Two-Stage Method for Long-Text Machine Reading Comprehension
1 Introduction
2 Related Work
3 Method
3.1 Segmentor
3.2 Retriever
3.3 Fusion
3.4 Reader
4 Experiments
4.1 Experimental Settings
4.2 Results
4.3 Case Study
5 Conclusion
References
EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension
1 Introduction
2 Related Work
3 Methodology
3.1 Encoding Module
3.2 Evidence Detector
3.3 Evidence-Emphasized Reasoning Graph
3.4 Prediction Module
4 Training with Distant Supervision
5 Experiment
5.1 Dataset and Evaluation Metrics
5.2 Experiment Settings
5.3 Baselines
5.4 Main Results
5.5 Ablation Study
5.6 Case Study
6 Conclusion and Future Work
References
Dialogue Systems
Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection
1 Introduction
2 Related Work
3 Methodology
3.1 Dialogue Module
3.2 Knowledge Encoder
3.3 Knowledge Selector
3.4 Modules Integration
4 Experiments
4.1 Datasets
4.2 Models for Comparison
4.3 Implement Details
4.4 Automatic Evaluation
4.5 Human Evaluation
4.6 Analysis
5 Conclusion
References
Multi-intent Attention and Top-k Network with Interactive Framework for Joint Multiple Intent Detection and Slot Filling
1 Introduction
2 Problem Definition
3 Model
3.1 Interactive Framework
3.2 Multi-intent Attention and Top-k Network
3.3 Decoder
3.4 Joint Training
4 Experiments
4.1 Datasets
4.2 Implementation Details
4.3 Main Results
4.4 Ablation Study
5 Related Work
6 Conclusion and Future Work
References
Enhancing Long-Distance Dialogue History Modeling for Better Dialogue Ellipsis and Coreference Resolution
1 Introduction
2 Model
2.1 User Utterance Encoder
2.2 Speaker Highlight Dialogue History Encoder
2.3 Decoder
2.4 Top-Down Hierarchical Copy Mechanism
3 Experiment
3.1 Data and Metrics
3.2 Experimental Settings
3.3 Experimental Results
3.4 Ablation Study
4 Case Study
5 Related Work
6 Conclusion
References
Exploiting Explicit and Inferred Implicit Personas for Multi-turn Dialogue Generation
1 Introduction
2 Related Work
2.1 Persona-Based Dialogue Model
2.2 von Mises-Fisher Distribution
3 The Proposed Model
3.1 Explicit Persona Extractor
3.2 Implicit Persona Inference
3.3 Persona Response Generator
3.4 Training Objective
4 Experiments
4.1 Experimental Settings
4.2 Experimental Results
5 Conclusion and Future Work
References
Few-Shot NLU with Vector Projection Distance and Abstract Triangular CRF
1 Introduction
2 Related Work
3 Problem Formulation
4 Our Proposed Few-Shot NLU Model
4.1 Support Set Reader
4.2 Semantic Parser
5 Experiment
5.1 Settings
5.2 Baselines
5.3 Main Results
5.4 Analysis
6 Conclusion
References
Cross-domain Slot Filling with Distinct Slot Entity and Type Prediction
1 Introduction
2 Our Approach
2.1 Slot Entity Model
2.2 Slot Type Model
2.3 Instance Weighting Scheme
3 Experiment
3.1 Dataset
3.2 Baselines
3.3 Implementation Details
3.4 Overall Results
3.5 Analysis on Slot Entity Identification
3.6 Ablation Study
3.7 Analysis on Seen and Unseen Slots
4 Conclusions
References
Social Media and Sentiment Analysis
Semantic Enhanced Dual-Channel Graph Communication Network for Aspect-Based Sentiment Analysis
1 Introduction
2 Related Work
3 Methods
3.1 Graph Convolutional Network
3.2 Sentence Encoder
3.3 Graph Encoder
3.4 Hierarchical Aspect-Based Attention
3.5 Model Training
4 Experiments
4.1 Datasets
4.2 Experimental Settings
4.3 Baselines
4.4 Results
4.5 Ablation Study
4.6 Case Study
5 Conclusion
References
Highway-Based Local Graph Convolution Network for Aspect Based Sentiment Analysis
1 Introduction
2 Related Work
3 Methodology
3.1 Bi-GRU Layer
3.2 Global Graph Convolution Module
3.3 Local Graph Convolutional Module
3.4 Aspect-Specific Mask and Context-Specific Mask
3.5 Sentiment Classification
3.6 Model Training
4 Experiment
4.1 Experimental Setting
4.2 Results
4.3 Number of GCN Layers
4.4 Ablation Study
4.5 Visualized Analysis
5 Conclusion
References
Dual Adversarial Network Based on BERT for Cross-domain Sentiment Classification
1 Introduction
2 Related Work
3 Model
3.1 Problem Definition
3.2 Further Pre-training
3.3 Dual Adversarial Network Based on BERT
4 Experiment
4.1 Dataset
4.2 Implementation Details
4.3 Baselines
4.4 Feature Visualization
4.5 Ablation Studies
4.6 Effects of K
5 Conclusion
References
Syntax and Sentiment Enhanced BERT for Earliest Rumor Detection
1 Introduction
2 Related Work
3 Problem Formulation
4 The Proposed SSE-BERT Model
4.1 Overall Architecture
4.2 Dependency Tree Encoding
4.3 Sentiment Words Recognition
4.4 Results Prediction
5 Experiments
5.1 Datasets and Settings
5.2 Results of Earliest Detection
5.3 Ablation Study
5.4 Analysis Study
6 Conclusion
References
Aspect-Sentiment-Multiple-Opinion Triplet Extraction
1 Introduction
2 Related Work
3 Aspect-Guided Framework (AGF)
3.1 Framework
3.2 Encoders
3.3 Stage One: Aspect Term Extraction (ATE)
3.4 Stage Two
4 Experiments
4.1 Datasets and Metrics
4.2 Our Methods
4.3 Implementation Details
4.4 Comparison Methods
4.5 Results
4.6 Ablation Study
4.7 Visualization of Attentions
5 Conclusion
References
Locate and Combine: A Two-Stage Framework for Aspect-Category Sentiment Analysis
1 Introduction
2 Related Work
2.1 Aspect Category Sentiment Classification
2.2 Aspect Term Sentiment Classification
3 Approach
3.1 "Locate" Stage
3.2 "Combine" Stage
4 Experiment
4.1 Datasets
4.2 Training Details
4.3 Baselines
4.4 Experimental Results
4.5 Ablation Study
4.6 Case Study
4.7 Error Analysis
5 Conclusion
References
Emotion Classification with Explicit and Implicit Syntactic Information
1 Introduction
2 Related Work
3 Proposed Framework
3.1 Baseline Model
3.2 Dependency Parser
3.3 Explicit Method
3.4 Implicit Method
3.5 Fusion of Explicit and Implicit Information
4 Experiments
4.1 Settings
4.2 Base Models
4.3 Main Results
4.4 The Influence of Sentence Length
4.5 Case Study
5 Conclusion
References
MUMOR: A Multimodal Dataset for Humor Detection in Conversations
1 Introduction
2 Dataset
2.1 Data Source
2.2 Data Format
2.3 Data Annotation
3 Dataset Analysis
4 Comparison with Existing Dataset
5 Conclusion and Future Work
References
NLP Applications and Text Mining
Advertisement Extraction from Content Marketing Articles via Segment-Aware Sentence Classification
1 Introduction
2 Related Work
2.1 Text Classification
2.2 Topic Models
3 Method
3.1 Problem Definition
3.2 Topic-Enhanced Neural Network
3.3 Segment-Aware Sentence Classification
4 Experiment
4.1 Experimental Settings
4.2 Experimental Results
4.3 Impacts of Topics
4.4 Impacts of the Parameter
4.5 Ablation Analysis
5 Conclusion
References
Leveraging Lexical Common-Sense Knowledge for Boosting Bayesian Modeling
1 Introduction
2 Preliminary
2.1 Definition
2.2 Probase
3 Methodology
3.1 RegBayes Framework
3.2 LRegBayes with Lexical Common-Sense Knowledge
3.3 LDA Driven by LRegBayes
4 Experiments
4.1 Datasets
4.2 Alternative Algorithms
4.3 Experiment Settings
4.4 Performance Summary
5 Conclusions
References
Aggregating Inter-viewpoint Relationships of User's Review for Accurate Recommendation
1 Introduction
2 The Proposed Model
2.1 Contextual Viewpoints Learning Component
2.2 High-Level Features Aggregation Component
2.3 Feature Interaction Component
3 Experiments
3.1 Experimental Settings
3.2 Performance Evaluation
3.3 Effectiveness of Penalization Term
3.4 Analysis of Capsule Networks
4 Conclusion
References
A Residual Dynamic Graph Convolutional Network for Multi-label Text Classification
1 Introduction
2 Related Work
3 Method
3.1 Encoding Layer
3.2 Label Attention Mechanism
3.3 Residual Dynamic GCN
3.4 Label Prediction
4 Experiments
4.1 Datasets
4.2 Evaluation Metrics
4.3 Baselines
4.4 Experimental Setting
4.5 Results
4.6 Ablation Study
4.7 Comparison with the Traditional Residual Connection
4.8 Analysis of Different GCN Layers
5 Conclusion
References
Sentence Ordering by Context-Enhanced Pairwise Comparison
1 Introduction
2 Related Work
2.1 Sequence Generating Models
2.2 Pairwise Models
3 Task Description
4 Methodology
4.1 Overview
4.2 Global Information Encoder
4.3 Local Information Encoder
4.4 Post-fusion
4.5 Organizing into Text
5 Experiments
5.1 Dataset
5.2 Baselines
5.3 Evaluation Metric
5.4 Results
5.5 Ablation Study
5.6 Comparing with Pre-fusion
6 Case Study
7 Conclusion
References
A Dual-Attention Neural Network for Pun Location and Using Pun-Gloss Pairs for Interpretation
1 Introduction
2 Related Work
2.1 Pun Location
2.2 Pun Interpretation
3 Methodology
3.1 Pun Location
3.2 Pun Interpretation
4 Experiment Settings
4.1 Dataset and Evaluation Metrics
4.2 Baselines
4.3 Hyperparameters
5 Experimental Results and Analysis
5.1 Pun Location
5.2 Pun Interpretation
5.3 Analysis
6 Conclusions and Future Work
References
A Simple Baseline for Cross-Domain Few-Shot Text Classification
1 Introduction
2 Related Work
3 Background
4 A Simple Baseline for XFew
4.1 Pre-training Stage
4.2 Induction Stage
5 Experiment
5.1 Datasets and Evaluation Metrics
5.2 Baselines
5.3 Implementation Details
5.4 Comparison Results
6 Conclusion and Future Work
References
Shared Component Cross Punctuation Clauses Recognition in Chinese
1 Introduction
2 Related Work
3 Dataset
3.1 Details
4 Methodology
4.1 Overall Architecture
4.2 Learning
5 Experimental Results
5.1 Experiment Setting
5.2 Main Results
5.3 Ablation Study
5.4 Discussions
6 Conclusion
References
BERT-KG: A Short Text Classification Model Based on Knowledge Graph and Deep Semantics
1 Introduction
2 Related Work
3 BERT-KG Model
3.1 Framework
3.2 Sentence Tree Generation
3.3 Extended Short Text Location Coding
3.4 Whole Word Masking Transformer Model
3.5 Model Training
4 Experiments and Results Analysis
4.1 Experimental Setting
4.2 Experimental Metrics
4.3 Comparison Model Selection
5 Conclusions
References
Uncertainty-Aware Self-paced Learning for Grammatical Error Correction
1 Introduction
2 Background
2.1 Grammatical Error Correction
2.2 Bayesian Neural Network
3 Methodology
3.1 Sample Selection
3.2 Confidence-Based Self-paced Learning
4 Experiments
4.1 Datasets and Evaluation Metrics
4.2 Experimental Setting
4.3 Main Results
4.4 Analysis of Filter Ratio p
4.5 Analysis of Temperature
4.6 Analysis of Self-paced Learning
5 Related Work
5.1 Grammatical Error Correction
5.2 Sample Selection
5.3 Uncertainty-Aware Learning
6 Conclusion
References
Metaphor Recognition and Analysis via Data Augmentation
1 Introduction
2 Related Work
2.1 Metaphor Recognition
2.2 Data Augmentation
3 Data Augmentation Based on Sentence Reconstruction
3.1 Sentence Reconstruction
3.2 Data Augmentation
4 Experiments
4.1 Datasets
4.2 Baselines
4.3 Experimental Results and Analysis
5 Conclusion and Future Work
References
Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning
1 Introduction
2 Task
3 Experiments
3.1 Experimental Settings
3.2 In-Distribution Generalization
3.3 Cross-Distribution Generalization
4 Related Work
5 Conclusion
References
Multimodality and Explainability
Skeleton-Based Sign Language Recognition with Attention-Enhanced Graph Convolutional Networks
1 Introduction
2 AEGCNs Method
2.1 Graph Construction
2.2 Adaptive Graph Convolution
2.3 Network Architecture
3 Experiments
3.1 Datasets
3.2 Implementation Details
3.3 Ablation Study of Our AEGCNs
3.4 Comparison with the State-of-the-Arts
3.5 Visualization of the Learned Graphs
3.6 Summary of Performance Analysis
4 Conclusions
References
XGPT: Cross-modal Generative Pre-Training for Image Captioning
1 Introduction
2 Related Work
3 Preliminaries
4 Cross-modal Generative Pre-Training for Image Captioning
4.1 Model Architecture
4.2 Generative Pre-training Tasks
4.3 XGPT Pre-training
5 Experiments and Results
5.1 Datasets
5.2 Implementation Details
5.3 Results and Analysis
5.4 Data Augmentation for Image Retrieval
5.5 Qualitative Studies
6 Conclusion
References
An Object-Extensible Training Framework for Image Captioning
1 Introduction
2 Related Work
3 Methodology
3.1 Framework Overview
3.2 Multi-modal Context Embedding
3.3 Object Grounding
3.4 Context-Aware Replacement for Automatic Data Generation
4 Experiments
4.1 Experimental Setup
4.2 Comparison with SOTA Methods
4.3 Human Evaluation
4.4 Qualitative Examples
4.5 Discussion
5 Conclusion
References
Relation-Aware Multi-hop Reasoning for Visual Dialog
1 Introduction
2 Related Work
2.1 Visual Dialog
2.2 Multi-hop Reasoning
3 Background
4 Proposed Model
4.1 Encoders
4.2 Multi-hop Reasoner
4.3 Answer Ranker
5 Experiment
5.1 VisDial Dataset and Metrics
5.2 Implementation Details
5.3 Quantitative Results
6 Conclusion
References
Multi-modal Sarcasm Detection Based on Contrastive Attention Mechanism
1 Introduction
2 Related Work
2.1 Sarcasm Detection
2.2 Multi-modal Affective Analysis
3 Methodology
3.1 Overview
3.2 Utterance Representation
3.3 Sequential Context Encoder
3.4 Contrastive-Attention-Based Encoder
3.5 Linear Decoder
4 Experiment
4.1 Setup
4.2 Results and Analysis
5 Conclusion
References
Author Index


📜 SIMILAR VOLUMES


Natural Language Processing and Chinese Computing
✍ Lu Wang (editor), Yansong Feng (editor), Yu Hong (editor), Ruifang He (editor) 📂 Library 📅 2021 🏛 Springer 🌐 English

This two-volume set of LNAI 13028 and LNAI 13029 constitutes the refereed proceedings of the 10th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2021, held in Qingdao, China, in October 2021. The 66 full papers, 23 poster papers, and 27 workshop paper…


Natural Language Processing and Chinese Computing
✍ Fei Liu (editor), Nan Duan (editor), Qingting Xu (editor), Yu Hong (editor) 📂 Library 📅 2023 🏛 Springer 🌐 English

This three-volume set constitutes the refereed proceedings of the 12th National CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2023, held in Foshan, China, during October 12–15, 2023. The 143 regular papers included in these proceedings were c…

Natural Language Processing and Chinese Computing
✍ Wei Lu (editor), Shujian Huang (editor), Yu Hong (editor), Xiabing Zhou (editor) 📂 Library 📅 2022 🏛 Springer 🌐 English

This two-volume set of LNAI 13551 and 13552 constitutes the refereed proceedings of the 11th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2022, held in Guilin, China, in September 2022. The 62 full papers, 21 poster papers, and 27 workshop papers pr…

Natural Language Processing and Chinese Computing
✍ Xiaodan Zhu, Min Zhang, Yu Hong, Ruifang He 📂 Library 📅 2020 🏛 Springer International Publishing 🌐 English

This two-volume set of LNAI 12340 and LNAI 12341 constitutes the refereed proceedings of the 9th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2020, held in Zhengzhou, China, in October 2020. The 70 full papers, 30 poster papers and 14 workshop papers presented were…
