Multi-Modal Sentiment Analysis
Author: Hua Xu
- Publisher
- Springer
- Year
- 2023
- Language
- English
- Pages
- 278
- Category
- Library
Synopsis
Natural human-machine interaction rests on capabilities such as human-machine dialogue, multi-modal sentiment analysis, and human-machine cooperation. Equipping intelligent computers with strong multi-modal sentiment analysis ability during human-computer interaction is therefore one of the key technologies for efficient and intelligent human-computer interaction. This book focuses on the research and practical applications of multi-modal sentiment analysis for human-computer natural interaction, particularly multi-modal information feature representation, feature fusion, and sentiment classification. Multi-modal sentiment analysis for natural interaction is a comprehensive research field that integrates natural language processing, computer vision, machine learning, pattern recognition, algorithms, intelligent robot systems, human-computer interaction, and related areas, and research in this field is developing rapidly. The book can be used as a professional textbook in natural interaction, intelligent question answering (customer service), natural language processing, and human-computer interaction. It can also serve as an important reference for developing systems and products in intelligent robots, natural language processing, human-computer interaction, and related fields.
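The synopsis describes a three-stage pipeline: per-modality feature representation, feature fusion, and sentiment classification. The sketch below is a minimal, hypothetical illustration of that pipeline using simple concatenation (late) fusion with random weights; the encoders, dimensions, and class labels are assumptions for illustration only, not the book's actual models.

```python
# Hypothetical sketch of the representation -> fusion -> classification
# pipeline described in the synopsis. Random weights stand in for trained
# encoders; this only illustrates the data flow, not a real model.
import numpy as np

rng = np.random.default_rng(0)

def encode(signal, dim=8):
    """Stand-in unimodal encoder: project a raw signal to a dim-d feature."""
    w = rng.standard_normal((len(signal), dim))
    return np.tanh(signal @ w)

# Pooled raw inputs for each modality (sizes are arbitrary assumptions).
text  = rng.standard_normal(16)   # e.g. pooled word embeddings
audio = rng.standard_normal(32)   # e.g. pooled acoustic frames
video = rng.standard_normal(24)   # e.g. pooled facial features

# Feature fusion: concatenate the three unimodal representations.
fused = np.concatenate([encode(text), encode(audio), encode(video)])

# Sentiment classification: linear layer + softmax over 3 classes
# (negative / neutral / positive).
w_cls = rng.standard_normal((fused.size, 3))
logits = fused @ w_cls
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
print(probs.argmax())  # index of the predicted sentiment class
```

In practice the encoders would be trained networks (the book's later chapters cover attention-based and mixup-based fusion instead of plain concatenation), but the stage boundaries are the same.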
Table of Contents
Preface
Contents
About the Author
List of Figures
List of Tables
Chapter 1: Overview
1.1 Overview of Multimodal Sentiment Analysis
1.1.1 Overview of Research on Multimodal Sentiment Analysis
1.1.2 Overview of Related Research on Modality Loss
1.1.3 Conclusion
1.2 Overview of Multimodal Machine Learning
1.2.1 Overview of Multimodal Representation Learning
Associative Representation Learning
Collaborative Representation Learning
1.2.2 Overview of Multimodal Representation Fusion
Prefusion
Mid-Fusion
Postfusion
End-Fusion
1.2.3 Conclusion
1.3 Overview of Multitask Learning Mechanisms
1.3.1 Multitasking Architecture in Computer Vision
1.3.2 Multitasking Architecture in Natural Language Processing
1.3.3 Multitasking Architecture in Multimodal Learning
1.3.4 Conclusion
1.4 Summary
References
Chapter 2: Multimodal Sentiment Analysis Data Sets and Preprocessing
2.1 Multimodal Sentiment Analysis Datasets
2.1.1 Introduction
2.1.2 CMU-MOSI
2.1.3 CMU-MOSEI
2.1.4 IEMOCAP
2.1.5 MELD
2.1.6 Conclusion
2.2 Multimodal Sentiment Analysis Dataset with Multilabel
2.2.1 Introduction
2.2.2 CH-SIMS Dataset
Data Collection
Annotation
Extracted Features
2.2.3 Multimodal Multitask Learning Framework
Unimodal Subnets
Feature Fusion Network
Optimization Objectives
2.2.4 Experiments
Baselines
Experimental Details
Results and Discussion
2.2.5 Conclusion
2.3 An Extension and Enhancement of the CH-SIMS Dataset
2.3.1 Introduction
2.3.2 CH-SIMS V2.0 Dataset
Data Collection
Data Annotation
2.3.3 Feature Extraction
2.3.4 Acoustic Visual Mixup Consistent (AV-MC) Framework
2.3.5 Experiments
Benchmark Results on CH-SIMS v2.0
Case Study
2.3.6 Conclusion
2.4 Summary
References
Chapter 3: Early Unimodal Sentiment Analysis of Comment Text Based on Traditional Machine Learning
3.1 Identifying Evaluative Sentences in Online Discussions
3.1.1 Introduction
3.1.2 The Proposed Technique
Extraction of Aspects and Expansion of Evaluation and Emotion Lexicons
Aspects, Evaluation Words, and Emotion Words Interaction
Classification
3.1.3 Experiments
Methods and Settings
Evaluation Results
Influence of the Parameters
3.1.4 Conclusion
3.2 Grouping Product Features Using Semisupervised Learning with Soft-Constraints
3.2.1 Introduction
3.2.2 The Proposed Algorithm
Semisupervised Learning Using EM
Proposed Soft-Constrained EM
3.2.3 Generating SL Using Constraints
3.2.4 Distributional Context Extraction
3.2.5 Experiments
Review Data Sets and Gold Standards
Evaluation Measures
Baseline Methods and Settings
Evaluation Results
Varying the Context Window Size
3.2.6 Conclusion
3.3 Constrained LDA for Grouping Product Features in Opinion Mining
3.3.1 Introduction
3.3.2 The Proposed Algorithm
Introduction to LDA
Constrained-LDA
3.3.3 Constraint Extraction
Must-link
Cannot-link
3.3.4 Experiments
Data Sets
Gold Standard
Evaluation Measure
Compared with LDA
Comparing with mLSA
Influence of Parameters
3.3.5 Conclusion
3.4 Product Feature Grouping for Opinion Mining
3.4.1 Introduction
3.4.2 The Proposed Soft-Constrained Algorithm
3.4.3 Extracting the Example Set Using Constraints
3.4.4 Distributional Context Extraction
3.4.5 Experiments
Evaluation Results
3.4.6 Conclusion
3.5 Exploiting Effective Features for Chinese Sentiment Classification
3.5.1 Introduction
3.5.2 Methodology
Feature Extraction
Term Weighting
Training and Classifying
3.5.3 Experimental Setup
Data Sets
Evaluation Metrics
3.5.4 Experimental Results
Performances of N-Gram-Based Features
Performances of Substring-Based Features
Comparison
3.5.5 Conclusion
3.6 An Empirical Study of Unsupervised Sentiment Classification of Chinese Reviews
3.6.1 Introduction
3.6.2 Proposed Technique
SNW Identification
Sentiment Polarity Computation
3.6.3 Empirical Evaluation
Datasets
Evaluation Measures
Impact of the SNW
Domain-Dependent Characteristics of SNW
Influences of the Sentiment Lexicons' Scale
3.6.4 Conclusion
3.7 Feature Subsumption for Sentiment Classification in Multiple Languages
3.7.1 Introduction
3.7.2 The Proposed Algorithm
Substring-Group Feature Extracting
Term Weighting
Feature Selecting
Classifying
3.7.3 Experimental Setup
Datasets
Evaluation Metrics
3.7.4 Experiments
Comparisons
Multilingual Characteristics
Feature Frequency Versus Feature Presence
Influence of Feature Selecting
Transductive Learning Vs. Inductive Learning
3.7.5 Conclusion
3.8 Summary
References
Chapter 4: Unimodal Sentiment Analysis
4.1 Text Sentiment Analysis Based on Word2vec and SVMperf
4.1.1 Introduction
4.1.2 Methodology
Similar Features Clustering
Sentiment Classification
4.1.3 Experiments
Data Sets
Evaluation Criteria
Experimental Results
4.1.4 Conclusion
4.2 Contextual Heterogeneous Feature Fusion Framework for Audio Sentiment Analysis
4.2.1 Introduction
4.2.2 Proposed Method
Context-Independent Feature Extraction
Context-Dependent Representation Learning
4.2.3 Experiments
Datasets
Baseline Models
Experimental Setup
Experimental Results
4.2.4 Conclusion
4.2.5 Introduction
4.2.6 Methodology
Coattentive Multitask Convolutional Neural Network
Spatial Coattention Module
Multitask Loss
Methodology
Benchmark Databases
Multitask baselines
Data Preprocessing
Experimental Details
4.2.7 Experiments
Comparisons with Multitask Methods
Comparisons with State-of-the-Arts
Transfer Validation
Feature Visualization
Time Cost Analysis
4.2.8 Conclusion
4.3 Summary
References
Chapter 5: Cross-Modal Sentiment Analysis
5.1 The Acoustic Visual Mixup Consistent (AV-MC) Framework
5.1.1 Introduction
5.1.2 Multimodal Sentiment Analysis (MSA) Background
Multimodal Dataset Construction
Modality Feature Extraction
Multimodal Fusion
Sentiment Prediction
5.1.3 Automatic Sentiment Computing Approach with Modality Mixup Strategy
Sentiment Prediction
The Training Process of Automatic Sentiment Computing Approach
5.1.4 Experiments
Dataset
Feature Extraction
Metrics
Baselines
Supervised Sentiment Analysis
Semisupervised Sentiment Analysis
5.1.5 Conclusion
5.2 Cross-Modal Sentiment Recognition Based on Hierarchical Grained and Acoustic Features
5.2.1 Introduction
5.2.2 Problem Definition
5.2.3 Methodology
5.2.4 Experiments
Datasets
Compared Baselines
Experimental Results
5.2.5 Conclusion
5.3 Cross-Modal Sentiment Classification for Alignment Sequences
5.3.1 Introduction
5.3.2 Methodology
Problem Definition
CM-BERT: Cross-Modal BERT
Masked Multimodal Attention
5.3.3 Experiments
Datasets and Experimental Settings
Audio Features and Multimodal Alignment
Evaluation Metrics
Baselines
Results and Discussion
Visualization of the Masked Multimodal Attention
5.3.4 Conclusion
5.4 Summary
References
Chapter 6: Multimodal Sentiment Analysis
6.1 Multimodal Sentiment Analysis Model Based on Self-Supervised Multitask Learning
6.1.1 Introduction
6.1.2 Methodology
Task Setup
Multimodal Task
Unimodal Task
ULGM
Relative Distance Value
Shifting Value
Momentum-based Update Policy
Optimization Objectives
6.1.3 Experiments
Datasets
Baselines
Basic Settings
Results and Analysis
6.1.4 Conclusion
6.2 Multimodal Sentiment Analysis Method Based on Modality Missing
6.2.1 Introduction
6.2.2 Methodology
Task Setup
Modality Feature Extraction Module
Modality Reconstruction Module
Fusion Module
Model Training
6.2.3 Experiments
Datasets
Feature Extraction
Baselines
Experimental Settings
Evaluation Metrics
Results and Discussion
6.2.4 Conclusion
6.3 Summary
References
Chapter 7: Multimodal Sentiment Analysis Platform and Application
7.1 An Integrated Platform for Multimodal Sentiment Analysis
7.1.1 Introduction
7.1.2 Platform Architecture
Data Management Module
Feature Extraction Module
Model Training Module
Result Analysis Module
7.1.3 Experiments
Feature Selection Comparison
MSA Benchmark Comparison
7.1.4 Model Analysis Demonstration
Intermediate Result Analysis
On-the-Fly Instance Analysis
Generalization Ability Analysis
7.1.5 Conclusion
7.2 Robust Multimodal Sentiment Analysis Platform
7.2.1 Introduction
7.2.2 Demonstrating Robust-MSA
Noise Generation
Noise Defense Methods
End-to-End MSA Pipeline
Noise Influence Demonstration
7.2.3 Engaging the Audience
7.2.4 Conclusion
7.3 Summary
References
Appendix
Symbol Cross-Reference Table
Code Link Table
Similar Volumes
The imaging of moving organs such as the heart, in particular, is a real challenge because of its movement. This book presents current and emerging methods developed for the acquisition of images of moving organs in the five main medical imaging modalities: conventional X-rays, computed tomograph…
Sentiment analysis is the computational study of people's opinions, sentiments, emotions, moods, and attitudes. This fascinating problem offers numerous research challenges, but promises insight useful to anyone interested in opinion analysis and social media analysis. This comprehensive introductio…
The growing mobility needs of travellers have led to the development of increasingly complex and integrated multi-modal transit networks. Hence, transport agencies and transit operators are now more urgently required to assist in the challenging task of effectively and efficiently planning, manag…