<p><b>Empirical methods are means of answering methodological questions of empirical sciences by statistical techniques.</b> The methodological questions addressed in this book include the problems of validity, reliability, and significance.</p>
Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science (Synthesis Lectures on Human Language Technologies)
Written by Stefan Riezler, Michael Hagmann
- Publisher: Springer; Second Edition
- Year: 2024
- Language: English
- Pages: 179
- Edition: 2
- Category: Library
No payment or registration required. For personal study only.
Synopsis
This book introduces empirical methods for machine learning with a special focus on applications in natural language processing (NLP) and data science. The authors present problems of validity, reliability, and significance and provide common statistical solutions to them. The book focuses on model-based empirical methods where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests such as a validity test that allows for the detection of circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on random effect parameters of LMEMs. Lastly, a significance test based on the likelihood ratios of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of input data. The book is self-contained with an appendix on the mathematical background of generalized additive models and linear mixed effects models as well as an accompanying webpage with the related R and Python code to replicate the presented experiments. The second edition also features a new hands-on chapter that illustrates how to use the included tools in practical applications.
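The likelihood-ratio test on nested LMEMs described in the synopsis can be sketched as follows. This is a minimal illustrative example in Python using statsmodels, not the book's own code: the synthetic data, the formulas, and the use of the meta-parameter setting ("seed") as the random-effect grouping variable are assumptions made here for illustration.

```python
# Sketch: significance testing via a likelihood ratio test on nested
# linear mixed effects models fit to per-example performance scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Synthetic scores for two systems (A and B) over several inputs and
# several meta-parameter settings; all effect sizes are invented.
rows = []
for seed in range(4):
    seed_offset = rng.normal(0, 0.05)        # variation across settings
    for item in range(50):
        item_offset = rng.normal(0, 0.1)     # variation across inputs
        for system, effect in (("A", 0.0), ("B", 0.08)):
            score = 0.5 + effect + seed_offset + item_offset + rng.normal(0, 0.05)
            rows.append({"score": score, "system": system,
                         "seed": seed, "item": item})
df = pd.DataFrame(rows)

# Null model: no system effect; alternative model adds the fixed effect
# of interest. Both share a random intercept for the meta-parameter
# setting, so that variation across settings enters the test. Fit by
# maximum likelihood (reml=False), as required for a likelihood ratio
# test comparing fixed-effect structures.
null = smf.mixedlm("score ~ 1", df, groups=df["seed"]).fit(reml=False)
alt = smf.mixedlm("score ~ system", df, groups=df["seed"]).fit(reml=False)

# Generalized likelihood ratio statistic; asymptotically chi-square
# distributed with df = difference in number of fixed-effect parameters.
lr = 2 * (alt.llf - null.llf)
p_value = chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```

A refined comparison conditional on input properties, as the synopsis mentions, would amount to adding further fixed effects (e.g. input length) and their interactions with the system factor to both models before comparing them.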
Table of Contents
Preface to the Second Edition
References
Preface to the First Edition
References
Acknowledgments
Contents
About the Authors
1 Introduction
1.1 Empirical Methods in Machine Learning
1.2 Scope and Outline of This Book
1.3 Intended Readership
2 Validity
2.1 Validity Problems in NLP and Data Science
2.1.1 Bias Features
2.1.2 Illegitimate Features
2.1.3 Circular Features
2.2 Theories of Measurement and Validity
2.2.1 The Concept of Validity in Psychometrics
2.2.2 The Theory of Scales of Measurement
2.2.3 Theories of Measurement in Philosophy of Science
2.3 Prediction as Measurement
2.3.1 Feature Representations
2.3.2 Measurement Data
2.4 Descriptive and Model-Based Validity Tests
2.4.1 Dataset Bias Test
2.4.2 Transformation Invariance Test
2.4.3 A Model-Based Test for Circularity
2.5 Notes on Practical Usage
3 Reliability
3.1 Untangling Terminology: Reliability, Agreement, and Others
3.2 Performance Evaluation as Measurement
3.3 Descriptive and Model-Based Reliability Tests
3.3.1 Agreement Coefficients for Data Annotation
3.3.2 Bootstrap Confidence Intervals for Model Evaluation
3.3.3 Model-Based Reliability Testing
3.4 Notes on Practical Usage
4 Significance
4.1 Parametric Significance Tests
4.2 Sampling-Based Significance Tests
4.2.1 Bootstrap Resampling
4.2.2 Permutation Tests
4.3 Model-Based Significance Testing
4.3.1 The Generalized Likelihood Ratio Test
4.3.2 Likelihood Ratio Tests Using LMEMs
4.4 Notes on Practical Usage
5 Analyzing Inferential Reproducibility
5.1 Towards Inferential Reproducibility
5.2 A Scheme for Analyzing Inferential Reproducibility
5.3 Inferential Reproducibility of Fine-Tuning Large Language Models
5.4 Replicating the Reproducibility Study
5.5 Discussion
5.6 Notes on Practical Usage
Mathematical Background
A.1 Generalized Additive Models
A.1.1 General Form of Model
A.1.2 Example
A.1.3 Parameter Estimation
A.2 Linear Mixed Effects Models
A.2.1 General Form of Model
A.2.2 Example
A.2.3 Parameter Optimization
A.3 The Distribution of the Likelihood Ratio Statistic
A.3.1 Score Function and Fisher Information
A.3.2 Taylor Expansion and Asymptotic Distribution
References
SIMILAR VOLUMES
<p><b>The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications</b>.</p>
Search for information is no longer exclusively limited to the native language of the user, but is more and more extended to other languages. This gives rise to the problem of cross-language information retrieval (CLIR), whose goal is to find relevant information written in a different language.
This book is aimed at providing an overview of several aspects of semantic role labeling. Chapter 1 begins with linguistic background on the definition of semantic roles and the controversies surrounding them. Chapter 2 describes how the theories have led to structured lexicons such as FrameNet and VerbNet.
A major part of natural language processing now depends on the use of text data to build linguistic analyzers. We consider statistical, computational approaches to modeling linguistic structure, and seek to unify across many approaches and many kinds of linguistic structures.
Linguistic annotation and text analytics are active areas of research and development, with academic conferences and industry events such as the Linguistic Annotation Workshops and the annual Text Analytics Summits. This book provides a basic introduction to both fields.