
Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13–17, 2021, Proceedings, Part II (Lecture Notes in Computer Science)

✍ Edited by Lu Wang, Yansong Feng, Yu Hong, and Ruifang He


Publisher
Springer
Year
2021
Language
English
Pages
647
Category
Library


✦ Synopsis


This two-volume set of LNAI 13028 and LNAI 13029 constitutes the refereed proceedings of the 10th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2021, held in Qingdao, China, in October 2021.

The 66 full papers, 23 poster papers, and 27 workshop papers presented were carefully reviewed and selected from 446 submissions. They are organized in the following areas: Fundamentals of NLP; Machine Translation and Multilinguality; Machine Learning for NLP; Information Extraction and Knowledge Graph; Summarization and Generation; Question Answering; Dialogue Systems; Social Media and Sentiment Analysis; NLP Applications and Text Mining; and Multimodality and Explainability.

✦ Table of Contents


Preface
Organization
Contents – Part II
Contents – Part I
Posters - Fundamentals of NLP
Syntax and Coherence - The Effect on Automatic Argument Quality Assessment
1 Introduction
2 Related Work
2.1 Theory Studies
2.2 Empirical Methods
3 Methodology
3.1 Input
3.2 Syntax Encoder
3.3 Coherence Encoder
3.4 Classification
4 Experiments
4.1 Dataset
4.2 Settings
4.3 Baselines
4.4 Results and Discussions
5 Case Study
5.1 Syntax Encoder
5.2 Coherence Encoder
6 Conclusions and Future Work
References
ExperienceGen 1.0: A Text Generation Challenge Which Requires Deduction and Induction Ability
1 Introduction
2 Related Work
3 Task Formulation
4 Dataset Construction
4.1 Causal Sentences and Syllogism
4.2 Get Candidate Sentences
4.3 Syllogism Reconstruction
4.4 Commonsense Knowledge Extraction
4.5 Quality Inspection of Dataset
4.6 Dataset Statistics
5 Experiments and Analysis
6 Result
6.1 Quantitative Analysis
6.2 Qualitative Analysis
7 Conclusion
References
Machine Translation and Multilinguality
SynXLM-R: Syntax-Enhanced XLM-R in Translation Quality Estimation
1 Introduction
2 Related Works
3 Methodology
3.1 XLM-R
3.2 Syntax-Aware Extractor
4 Experiments
4.1 Data Preparation
4.2 Baselines
4.3 Training Details
4.4 Correlation with DA Scores
5 Discussion
5.1 Effect of Different Parsers
5.2 Attention Heads in GAT
5.3 Limitations and Suggestions
6 Conclusion
References
Machine Learning for NLP
Memetic Federated Learning for Biomedical Natural Language Processing
1 Introduction
2 Related Work
3 Mem-Fed Framework
3.1 Overview
3.2 Local Training
3.3 Local Searching
3.4 Model Aggregation
4 Experiments
4.1 Experimental Setup
4.2 Quantitative Comparison
4.3 Group Number in Mem-Fed
4.4 Ablation Study on Memetic Aggregation
4.5 Local Searching Strategies of Mem-Fed
5 Conclusion
References
Information Extraction and Knowledge Graph
Event Argument Extraction via a Distance-Sensitive Graph Convolutional Network
1 Introduction
2 The DSGCN Model
2.1 Word Encoding
2.2 Distance-Sensitive Graph Convolutional Network
2.3 Task-Specific Pooling
2.4 Argument Classification
3 Experiments
4 Conclusion and Future Work
References
Exploit Vague Relation: An Augmented Temporal Relation Corpus and Evaluation
1 Introduction
2 Related Work
3 Data Annotation
3.1 Data
3.2 Annotation Process
3.3 Data Statistics
3.4 Annotation Quality
4 Temporal Relation Classification and Corpus Usage Strategy
4.1 Temporal Relation Classification Model
4.2 Corpus Usage Strategy
5 Experimentation
5.1 Experimental Settings
5.2 Experimental Results
5.3 Generalization
5.4 Error Analysis
5.5 Downsampling Training Set
6 Conclusion
References
Searching Effective Transformer for Seq2Seq Keyphrase Generation
1 Introduction
2 Methodology
2.1 Reduce Attention to Uninformative Content
2.2 Relative Multi-head Attention
3 Experiment Settings
3.1 Notations and Problem Definition
3.2 Datasets
3.3 Evaluation Metrics
3.4 Implementation Details
4 Results and Discussions
4.1 Applying Transformer to Keyphrase Generation
4.2 Tuning Transformer Model
4.3 Adapting Transformer to Keyphrase Generation
4.4 Observations and Findings
5 Related Work
6 Conclusion
References
Prerequisite Learning with Pre-trained Language and Graph Embedding Models
1 Introduction
2 Related Work
3 Proposed Approach
3.1 Text-Based Module
3.2 Graph-Based Module
3.3 Joint Learning of Two Modules
4 Experiments
4.1 Datasets
4.2 Evaluation Settings
4.3 Results
5 Conclusion
References
Summarization and Generation
Variational Autoencoder with Interactive Attention for Affective Text Generation
1 Introduction
2 Variational Autoencoder with Interactive Attention
2.1 Encoder
2.2 Variational Attention
2.3 Decoder
3 Experimental Results
3.1 Dataset
3.2 Evaluation Metrics
3.3 Implementation Details
3.4 Comparative Results
3.5 Ablation Experiment
3.6 Case Study
4 Conclusions
References
CUSTOM: Aspect-Oriented Product Summarization for E-Commerce
1 Introduction
2 Methodology
2.1 CUSTOM: Aspect-Oriented Product Summarization for E-Commerce
2.2 SMARTPHONE and COMPUTER
2.3 EXT: Extraction-Enhanced Generation Framework
3 Experiment
3.1 Comparison Methods
3.2 Implementation Details
3.3 Diversity Evaluation for CUSTOM
3.4 Quality Evaluation for EXT
3.5 Human Evaluation
3.6 Extractor Analysis
3.7 Case Study
4 Related Work
4.1 Product Summarization
4.2 Conditional Text Generation
5 Conclusion
References
Question Answering
FABERT: A Feature Aggregation BERT-Based Model for Document Reranking
1 Introduction
2 Related Work
3 Model
3.1 Problem Definition
3.2 QA Pairs Encoder
3.3 Feature Aggregation
4 Experiment and Results
4.1 Dataset and Baselines
4.2 Evaluation Metrics
4.3 Setting
4.4 Results
5 Conclusion
References
Generating Relevant, Correct and Fluent Answers in Natural Answer Generation
1 Introduction
2 Related Work
2.1 Natural Answer Generation
2.2 Text Editing
3 Splitting Answering Process into Template Generation and Span Extraction
3.1 Span Extraction
3.2 Template Generation with Editing
3.3 Filling the Template
3.4 Training
4 Selecting in Candidate Spans
4.1 Using Statistics in Training Data
4.2 Using Masked Language Model
4.3 Final Score
5 Experiments
5.1 Dataset and Settings
5.2 Results
5.3 Ablations
5.4 Discussion About Candidate Span Selection
5.5 Case Study
5.6 Advantages Compared to Extracting and Generative Models
6 Conclusion
References
GeoCQA: A Large-Scale Geography-Domain Chinese Question Answering Dataset from Examination
1 Introduction
2 Related Work
2.1 Machine Reading Comprehension
2.2 Open-Domain Question Answering
2.3 Comparison with Other Datasets
3 Dataset Collection and Analysis
3.1 Dataset Collection
3.2 Reasoning Types
4 Experiments
4.1 Rule-Based Method
4.2 Neural Models
4.3 Experiment Setting
4.4 Baseline Results
4.5 Error Analysis
5 Conclusion
References
Dialogue Systems
Generating Informative Dialogue Responses with Keywords-Guided Networks
1 Introduction
2 Related Work
3 Keywords-Guided Sequence-to-Sequence Model
3.1 Context Encoder and Response Decoder
3.2 Keywords Decoder and Keywords Encoder
3.3 The Cosine Annealing Mechanism
3.4 Keywords Acquisition
4 Experiments
4.1 Experiments Setting
4.2 Datasets
4.3 Automatic Evaluation
4.4 Human Evaluation
4.5 The Keywords Ratio
5 Conclusion
A Appendix
A.1 The Cosine Annealing Mechanism
A.2 Case Study
References
Zero-Shot Deployment for Cross-Lingual Dialogue System
1 Introduction
2 Problem Definition and Background
3 Approach
3.1 Pseudo Data Construction
3.2 Noise Injection Method
3.3 Multi-task Training and Adaptation
4 Experiments
4.1 Experimental Settings
4.2 Experimental Results and Analysis
5 Related Work
6 Conclusion
References
MultiWOZ 2.3: A Multi-domain Task-Oriented Dialogue Dataset Enhanced with Annotation Corrections and Co-Reference Annotation
1 Introduction
2 Annotation Corrections
2.1 Dialogue Act Corrections
2.2 Dialogue State Corrections
3 Enhance Dataset with Co-Referencing
3.1 Annotation for Co-reference in Dialogue
3.2 Annotation for Co-reference in User Goal
4 Benchmarks and Experimental Results
4.1 Dialogue Actions with Natural Language Understanding Benchmarks
4.2 Dialogue State Tracking Benchmarks
4.3 Experimental Analysis
5 Discussion
6 Conclusion
References
EmoDialoGPT: Enhancing DialoGPT with Emotion
1 Introduction
2 Related Work
3 Methodology
3.1 Model Architecture
3.2 Input Representation
3.3 Emotion Injection
3.4 Optimization
4 Dataset
4.1 Emotion Classifier
4.2 Dialogue Dataset with Emotion Labels
5 Experiments
5.1 Experimental Settings
5.2 Baselines
5.3 Automatic Evaluation of Emotion Expression
5.4 Automatic Evaluation of Response Quality
5.5 Human Evaluation
5.6 Case Study
6 Conclusion and Future Work
References
Social Media and Sentiment Analysis
BERT-Based Meta-Learning Approach with Looking Back for Sentiment Analysis of Literary Book Reviews
1 Introduction
2 Related Work
3 Method
3.1 BERT-Based Meta-Learning
3.2 Meta-Learning with Looking Back
4 Experiments
4.1 Dataset
4.2 Settings
4.3 Result
5 Conclusion
References
ISWR: An Implicit Sentiment Words Recognition Model Based on Sentiment Propagation
1 Introduction
2 Related Work
2.1 Implicit Sentiment Analysis
2.2 Sentiment Propagation
3 Implicit Sentiment Words Recognition Based on Sentiment Propagation
3.1 Construct Words Graph
3.2 Sentiment Propagation in Word Graph
4 Experiment and Analysis
4.1 Datasets and Evaluation Index
4.2 Data Preprocessing
4.3 Model Parameters Analysis
4.4 Compare Different Sentiment Propagation Models
5 Conclusion and Future Work
References
An Aspect-Centralized Graph Convolutional Network for Aspect-Based Sentiment Classification
1 Introduction
2 Related Work
3 Methodology
3.1 Constructing Aspect Centralized Graph
3.2 Aspect Centralized Graph Convolutional Network
3.3 Model Training
4 Experiments
4.1 Dataset and Experiment Setting
4.2 Comparison Model
4.3 Main Results
4.4 Ablation Study
5 Conclusion
References
NLP Applications and Text Mining
Capturing Global Informativeness in Open Domain Keyphrase Extraction
1 Introduction
2 Related Work
3 Methodology
4 Experimental Methodology
5 Results and Analyses
5.1 Overall Accuracy
5.2 Performance w.r.t. Keyphrase Lengths
5.3 Performance w.r.t. Keyphrase Types
5.4 Case Studies
6 Conclusion
References
Background Semantic Information Improves Verbal Metaphor Identification
1 Introduction
2 Related Work
3 Method
3.1 Problem Formulation
3.2 Pre-processing
3.3 Model
3.4 Training
4 Experiments
4.1 Data Preparation
4.2 Baselines
4.3 Setup
4.4 Result
4.5 Analyses
5 Conclusion
References
Multimodality and Explainability
Towards Unifying the Explainability Evaluation Methods for NLP
1 Introduction
2 Related Work
3 Explanation Generation
3.1 Classification Task and Dataset
3.2 Corpus Exploitation
3.3 Architectural Design
4 Experimental Setting
5 Evaluation
5.1 Quantitative Analysis
5.2 Qualitative Analysis
6 Conclusions
References
Explainable AI Workshop
Detecting Covariate Drift with Explanations
1 Introduction
1.1 Related Work
2 Methods
3 Experiments
4 Results
5 Discussion and Future Work
6 Conclusion
References
A Data-Centric Approach Towards Deducing Bias in Artificial Intelligence Systems for Textual Contexts
1 Introduction
1.1 Problem Statement
1.2 Research Proposal
2 Existing Literature
2.1 Defining Bias
2.2 Understanding Explainable AI
3 Our Proposed Work
3.1 Quantifying Bias in Textual Data
3.2 Results on Sample Text Extracts
4 Conclusion and Future Work
References
Student Workshop
Enhancing Model Robustness via Lexical Distilling
1 Introduction
2 Related Work
3 The Model
3.1 RNN-based Model: Auto-Encoder
3.2 Lexical Distiller
3.3 Discriminator
3.4 Training
4 Experiments
4.1 Data Set
4.2 Construct Noisy Input
4.3 Evaluation on Robustness
5 Conclusion
References
Multi-stage Multi-modal Pre-training for Video Representation
1 Introduction
2 Proposed Method
2.1 Model Architecture
2.2 MSMM Pre-training and Fine-Tuning
3 Experiment
3.1 Dataset
3.2 Implementation Details
3.3 Analysis
4 Related Work
5 Conclusion
References
Nested Causality Extraction on Traffic Accident Texts as Question Answering
1 Introduction
2 Related Works
3 Method
3.1 BERT
3.2 Cause Tagger
3.3 Effect Extraction
4 Experiment
4.1 Dataset
4.2 Results
5 Conclusion
References
Evaluation Workshop
MSDF: A General Open-Domain Multi-skill Dialog Framework
1 Introduction
2 Related Work
3 Method
3.1 Multi-skill Dialog
3.2 Overall Framework
3.3 Data Processing
4 Experiment
4.1 Experimental Settings
4.2 Evaluation
4.3 Discussion
5 Conclusion
References
RoKGDS: A Robust Knowledge Grounded Dialog System
1 Introduction
2 Related Work
3 Method
3.1 Bucket Encoder
3.2 Hybrid Decoder
4 Experiments
4.1 Dataset
4.2 Implementation Details
4.3 Baseline Models
4.4 Automatic Evaluation
4.5 Human Evaluation
4.6 Visualization
4.7 Conclusions
References
Enhanced Few-Shot Learning with Multiple-Pattern-Exploiting Training
1 Introduction
2 Background
2.1 Task Definition
2.2 Formulation
3 Methodology
3.1 MPET: Multiple-Pattern-Exploiting Training
3.2 Post-training
3.3 Multi-stage Training
4 Experiment
4.1 Settings
4.2 Results
5 Conclusion
References
BIT-Event at NLPCC-2021 Task 3: Subevent Identification via Adversarial Training
1 Introduction
2 BIT-Event's System
2.1 Active Learning
2.2 Adversarial Training
2.3 Semi-supervised Learning
2.4 Ensemble
3 Experiments
3.1 Dataset
3.2 Experiment Setting
3.3 Results
3.4 Discussions
4 Related Work
5 Conclusion
References
Few-Shot Learning for Chinese NLP Tasks
1 Introduction
2 Task Description
2.1 Task Overview
2.2 Evaluation Criteria
3 Dataset Description
4 Baselines
5 Submissions
5.1 Alibaba DAMO Academy and Computing Platform PAI
5.2 Tencent Cloud Dingdang Education
5.3 Changhong AI Laboratory
5.4 Business Intelligence Laboratory of Baidu Research Institute
5.5 Team from Beijing University of Posts and Telecommunications
5.6 Team from Zhejiang University
6 Shared-Task Results
7 Conclusion
References
When Few-Shot Learning Meets Large-Scale Knowledge-Enhanced Pre-training: Alibaba at FewCLUE
1 Introduction
2 Related Work
2.1 Pre-trained Language Models
2.2 Knowledge-Enhanced Pre-trained Language Models
2.3 Few-Shot Learning for Pre-trained Language Models
3 The Proposed Approach
3.1 Task Description
3.2 Knowledge-Enhanced Pre-training
3.3 Continual Pre-training
3.4 Fuzzy-PET Algorithm for Few-Shot Fine-Tuning
4 Experiments
4.1 Experimental Details
4.2 Experimental Results
5 Concluding Remarks
References
TKB2ert: Two-Stage Knowledge Infused Behavioral Fine-Tuned BERT
1 Introduction
2 Related Work
3 Data Description
4 Method
4.1 Stage 1: Behavioral Fine-Tuning
4.2 Stage 2: Attentive Reader
4.3 Post-processing
5 Experiment
5.1 Settings
5.2 Ensemble Strategy
5.3 Results on Test Set 1
5.4 Results on Test Set 2
6 Conclusion
References
A Unified Information Extraction System Based on Role Recognition and Combination
1 Introduction
2 Task Data and Analysis
2.1 Sentence-Level Event Extraction
2.2 Document-Level Event Extraction
2.3 Relation Extraction
2.4 Data Analysis
3 Model
3.1 Multi-label Pointer Network
3.2 Co-occurrence Matrix
3.3 Roles Combination
3.4 Enumeration Type Classification
4 Experiments and Results
4.1 Implementation Details
4.2 Ensemble
4.3 Results
5 Conclusion
References
An Effective System for Multi-format Information Extraction
1 Introduction
2 Related Work
3 Methodology
3.1 Relation Extraction
3.2 Sentence-Level Event Extraction
3.3 Document-Level Event Extraction
3.4 Loss Function
3.5 Model Enhancement Techniques
4 Experiments
4.1 Basic Settings
4.2 Results
5 Conclusions
References
A Hierarchical Sequence Labeling Model for Argument Pair Extraction
1 Introduction
2 Related Work
3 Approach
3.1 Task Definition
3.2 Argument Tagger
3.3 Argument Representer
3.4 Argument Pair Tagger
3.5 Training
4 Experimental Setup
4.1 Dataset
4.2 Evaluation Metric
4.3 Implementation Details
4.4 Compared Models
5 Results
5.1 Results of Basic Models
5.2 Results of Improved Models
5.3 Ensemble Approach
6 Conclusion
References
Distant Finetuning with Discourse Relations for Stance Classification
1 Introduction
2 Related Work
3 Unsupervised Data Preparation
3.1 Data D1 Extraction for Distant Finetuning
3.2 Low-Noise Finetuning Data D2 Extraction
3.3 Stance Detection Data in Other Languages
4 Staged Training with Noisy Finetuning
4.1 Distant Finetuning
4.2 Noisy and Clean Finetuning
4.3 Ensembling
5 Experiments
5.1 Encoders
5.2 Distant Finetuning
5.3 Stages of Finetuning
5.4 Added Noisy Samples in Finetuning
6 Conclusion
References
The Solution of Xiaomi AI Lab to the 2021 Language and Intelligence Challenge: Multi-format Information Extraction Task
1 Introduction
2 Related Work
3 Methods
3.1 Task of Relation Extraction
3.2 Task of Event Extraction
4 Experiments and Results
4.1 Task of Relation Extraction
4.2 Task of Event Extraction
5 Conclusion
References
A Unified Platform for Information Extraction with Two-Stage Process
1 Introduction
1.1 Relation Extraction (RE)
1.2 Sentence-Level Event Extraction (SentEE)
1.3 Document-Level Event Extraction (DocEE)
2 Model Description
2.1 Enhanced NER Module
2.2 Customized Manoeuvres
3 Experiments
3.1 Experimental Settings
3.2 Main Results
4 Conclusion
References
Overview of the NLPCC 2021 Shared Task: AutoIE2
1 Introduction
2 Related Work
3 Evaluation Task
3.1 Setting
3.2 Dataset
3.3 Baseline
4 Task Analysis
4.1 Factor Analysis
4.2 Submission Analysis
5 Conclusion
References
Task 1 - Argumentative Text Understanding for AI Debater (AIDebater)
1 Introduction
1.1 Background
1.2 Task Description
2 Related Work
3 Methodology
3.1 RoBERTa
3.2 MacBERT
3.3 Nezha
4 Result
References
Two Stage Learning for Argument Pairs Extraction
1 Introduction
2 Related Works
3 Method
4 Experiment and Results
4.1 Dataset and Metrics
4.2 Settings
4.3 Baselines
4.4 Main Results
5 Conclusions
References
Overview of Argumentative Text Understanding for AI Debater Challenge
1 Introduction
2 Related Works
2.1 Supporting Material Identification
2.2 Argument Pair Identification from Online Forums
2.3 Argument Pair Extraction from Peer Review and Rebuttal
3 Task Description
3.1 Track1: Supporting Material Identification
3.2 Track2: Argument Pair Identification from Online Forum
3.3 Track3: Argument Pair Extraction from Peer Review and Rebuttal
4 Challenge Details
4.1 Track1: Supporting Material Identification
4.2 Track2: Argument Pair Identification from Online Forum
4.3 Track3: Argument Pair Extraction from Peer Review and Rebuttal
5 Conclusion
References
ACE: A Context-Enhanced Model for Interactive Argument Pair Identification
1 Introduction
2 Proposed Model
2.1 Task Definition
2.2 Model Structure
2.3 Data Augmentation
3 Experiments
3.1 Datasets
3.2 Experiment Settings
3.3 Results and Analysis
3.4 Ablation Study
3.5 Case Study
4 Related Work
4.1 Dialogical Argumentation
4.2 Pre-trained Language Model
5 Conclusion
References
Context-Aware and Data-Augmented Transformer for Interactive Argument Pair Identification
1 Introduction
2 Related Work
3 Methodology
3.1 Problem Definition
3.2 Argument Pair Identification with Context
3.3 Data Augmentation
4 Experiments and Details
4.1 Dataset
4.2 Implementation Details
4.3 Results and Analysis
5 Conclusion
References
ARGUABLY @ AI Debater-NLPCC 2021 Task 3: Argument Pair Extraction from Peer Review and Rebuttals
1 Introduction
2 Related Work
3 Task Description
4 Dataset Description
5 Methodology
5.1 Sentence Encoder
5.2 Bi-LSTM-CRF
5.3 Bi-LSTM-Linear
5.4 Multi-task Training
6 Experiments and Results
6.1 Phase-I
6.2 Phase-II
6.3 Analysis
7 Conclusion and Future Work
References
Sentence Rewriting for Fine-Tuned Model Based on Dictionary: Taking the Track 1 of NLPCC 2021 Argumentative Text Understanding for AI Debater as an Example
1 Introduction
2 Related Work
2.1 Non Pre-training Mode
2.2 Pre-training Model
3 Methodology
3.1 Dataset
3.2 Sentence Rewriting Based Dictionary for Fine-Tuned Model
4 Experiment
5 Conclusion and Expectation
References
Knowledge Enhanced Transformers System for Claim Stance Classification
1 Introduction
2 Data Analysis and Processing
2.1 Data Processing
3 Methodology
3.1 Text-Transformers
3.2 Knowledge Enhanced Text-Transformers
4 Experiments
4.1 Experimental Settings
4.2 Training Strategy
4.3 Results
5 Related Work
5.1 Ensemble Learning
5.2 Pre-training Language Models
6 Conclusion
References
Author Index


📜 SIMILAR VOLUMES


Natural Language Processing and Chinese Computing
✍ Lu Wang (editor), Yansong Feng (editor), Yu Hong (editor), Ruifang He (editor) 📂 Library 📅 2021 🏛 Springer 🌐 English

This two-volume set of LNAI 13028 and LNAI 13029 constitutes the refereed proceedings of the 10th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2021, held in Qingdao, China, in October 2021. The 66 full papers, 23 poster papers, and 27 workshop papers presented were carefully reviewed and selected from 446 submissions.

Natural Language Processing and Chinese
✍ Xiaodan Zhu, Min Zhang, Yu Hong, Ruifang He 📂 Library 📅 2020 🏛 Springer International Publishing 🌐 English

This two-volume set of LNAI 12340 and LNAI 12341 constitutes the refereed proceedings of the 9th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2020, held in Zhengzhou, China, in October 2020. The 70 full papers, 30 poster papers and 14 workshop papers presented were …

Natural Language Processing and Chinese Computing
✍ Fei Liu (editor), Nan Duan (editor), Qingting Xu (editor), Yu Hong (editor) 📂 Library 📅 2023 🏛 Springer 🌐 English

This three-volume set constitutes the refereed proceedings of the 12th National CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2023, held in Foshan, China, during October 12–15, 2023. The 143 regular papers included in these proceedings were carefully reviewed and selected …

Natural Language Processing and Chinese Computing
✍ Wei Lu (editor), Shujian Huang (editor), Yu Hong (editor), Xiabing Zhou (editor) 📂 Library 📅 2022 🏛 Springer 🌐 English

This two-volume set of LNAI 13551 and 13552 constitutes the refereed proceedings of the 11th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2022, held in Guilin, China, in September 2022. The 62 full papers, 21 poster papers, and 27 workshop papers presented …