Data matching: concepts and techniques for record linkage, entity resolution, and duplicate detection
By Peter Christen (auth.)
- Publisher
- Springer-Verlag Berlin Heidelberg
- Year
- 2012
- Language
- English
- Pages
- 279
- Series
- Data-centric systems and applications
- Edition
- 1
- Category
- Library
Synopsis
Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching and merging records that correspond to the same entities from several databases or even within one database. Based on research in various domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially on how to improve the accuracy of data matching, and its scalability to large databases.
Peter Christen's book is divided into three parts: Part I, "Overview", introduces the subject by presenting several sample applications and their special challenges, as well as a general overview of a generic data matching process. Part II, "Steps of the Data Matching Process", then details its main steps, such as pre-processing, indexing, field and record comparison, classification, and quality evaluation. Part III, "Further Topics", deals with specific aspects such as privacy, real-time matching, and matching unstructured data, and briefly describes the main features of many research and open-source systems available today.
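The main steps listed above can be illustrated with a minimal sketch of a generic matching pipeline. This is an assumption-laden toy example, not the book's implementation: the field names, the first-letter blocking key, the `difflib` similarity measure, and the 0.85 threshold are all hypothetical choices made for illustration.

```python
# Toy sketch of the generic data matching process: pre-processing,
# indexing/blocking, field comparison, and threshold classification.
# All names, keys, and thresholds are illustrative assumptions.
from difflib import SequenceMatcher
from collections import defaultdict

def preprocess(record):
    """Pre-processing: lower-case and strip whitespace from all fields."""
    return {k: v.strip().lower() for k, v in record.items()}

def block_key(record):
    """Indexing/blocking: group records by the first letter of the surname,
    so only records within the same block are compared."""
    return record["surname"][:1]

def field_similarity(a, b):
    """Field comparison: approximate string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def match_pairs(db_a, db_b, threshold=0.85):
    """Classification: average the field similarities of each candidate
    pair and declare a match when the score clears a fixed threshold."""
    blocks = defaultdict(list)
    for j, rec in enumerate(db_b):
        blocks[block_key(rec)].append((j, rec))
    matches = []
    for i, rec_a in enumerate(db_a):
        for j, rec_b in blocks.get(block_key(rec_a), []):
            sims = [field_similarity(rec_a[f], rec_b[f]) for f in rec_a]
            score = sum(sims) / len(sims)
            if score >= threshold:
                matches.append((i, j, round(score, 2)))
    return matches

db_a = [preprocess({"surname": "Smith ", "given": "John"})]
db_b = [preprocess({"surname": "smith", "given": "Jon"}),
        preprocess({"surname": "Jones", "given": "John"})]
print(match_pairs(db_a, db_b))
```

Blocking keeps the comparison count far below the full cross product of the two databases, which is why the book devotes a whole chapter to indexing; real systems use far more robust keys and comparison functions than this sketch.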
By providing the reader with a broad range of data matching concepts and techniques and touching on all aspects of the data matching process, this book helps researchers as well as students specializing in data quality or data matching aspects to familiarize themselves with recent research advances and to identify open research challenges in the area of data matching. To this end, each chapter of the book includes a final section that provides pointers to further background and research material. Practitioners will better understand the current state of the art in data matching as well as the internal workings and limitations of current systems. In particular, they will learn that it is often not feasible to simply implement an existing off-the-shelf data matching system without substantial adaptation and customization. Such practical considerations are discussed for each of the major steps in the data matching process.
Table of Contents
Front Matter....Pages i-xix
Front Matter....Pages 1-1
Introduction....Pages 3-22
The Data Matching Process....Pages 23-35
Front Matter....Pages 37-37
Data Pre-Processing....Pages 39-67
Indexing....Pages 69-100
Field and Record Comparison....Pages 101-127
Classification....Pages 129-162
Evaluation of Matching Quality and Complexity....Pages 163-184
Front Matter....Pages 185-185
Privacy Aspects of Data Matching....Pages 187-207
Further Topics and Research Directions....Pages 209-228
Data Matching Systems....Pages 229-242
Back Matter....Pages 243-270
Subjects
Database Management; Data Mining and Knowledge Discovery; Information Storage and Retrieval; Artificial Intelligence (incl. Robotics); Pattern Recognition
SIMILAR VOLUMES
This book helps practitioners gain a deeper understanding, at an applied level, of the issues involved in improving data quality through editing, imputation, and record linkage. The first part of the book deals with methods and models. Here, we focus on the Fellegi-Holt edit-imputation model, the …
Large data sets arriving at ever-increasing speeds require a new set of efficient data analysis techniques. Data analytics is becoming an essential component for every organization and for technology areas such as health care, financial trading, the Internet of Things, Smart Cities, and Cyber-Physical Systems.