Error Correction Coding: Mathematical Methods and Algorithms
by Todd K. Moon
- Publisher: Wiley-Interscience
- Year: 2005
- Language: English
- Pages: 803
- Edition: 1
- Category: Library
Synopsis
An unparalleled learning tool and guide to error correction coding
Error correction coding techniques allow the detection and correction of errors occurring during the transmission of data in digital communication systems. These techniques are nearly universally employed in modern communication systems, and are thus an important component of the modern information economy.
Error Correction Coding: Mathematical Methods and Algorithms provides a comprehensive introduction to both the theoretical and practical aspects of error correction coding, with a presentation suitable for a wide variety of audiences, including graduate students in electrical engineering, mathematics, or computer science. The pedagogy is arranged so that the mathematical concepts are presented incrementally, followed immediately by applications to coding. A large number of exercises expand and deepen students' understanding. A unique feature of the book is a set of programming laboratories, supplemented with over 250 programs and functions on an associated Web site, which provides hands-on experience and a better understanding of the material. These laboratories lead students through the implementation and evaluation of Hamming codes, CRC codes, BCH and R-S codes, convolutional codes, turbo codes, and LDPC codes.
This text covers both "classical" coding theory (Hamming, BCH, Reed-Solomon, Reed-Muller, and convolutional codes) and modern codes and decoding methods, including turbo codes, LDPC codes, repeat-accumulate codes, space-time codes, factor graphs, soft-decision decoding, Guruswami-Sudan decoding, EXIT charts, and iterative decoding. Theoretical complements on performance and bounds are presented. Coding is also put into its communications and information-theoretic context, and connections are drawn to public key cryptosystems.
Ideal as a classroom resource and a professional reference, this thorough guide will benefit electrical and computer engineers, mathematicians, students, researchers, and scientists.
Table of Contents
Error Correction Coding Mathematical Methods and Algorithms
Contents
Preface
List of Program Files
List of Laboratory Exercises
List of Algorithms
List of Figures
List of Tables
List of Boxes
Part I Introduction and Foundations
1 A Context for Error Correction Coding
1.1 Purpose of This Book
1.2 Introduction: Where Are Codes?
1.3 The Communications System
1.4 Basic Digital Communications
1.4.1 Binary Phase-Shift Keying
1.4.2 More General Digital Modulation
1.5 Signal Detection
1.5.1 The Gaussian Channel
1.5.2 MAP and ML Detection
1.5.3 Special Case: Binary Detection
1.5.4 Probability of Error for Binary Detection
1.5.5 Bounds on Performance: The Union Bound
1.5.6 The Binary Symmetric Channel
1.5.7 The BSC and the Gaussian Channel Model
1.6 Memoryless Channels
1.7 Simulation and Energy Considerations for Coded Signals
1.8 Some Important Definitions
1.8.1 Detection of Repetition Codes Over a BSC
1.8.2 Soft-Decision Decoding of Repetition Codes Over the AWGN
1.8.3 Simulation of Results
1.8.4 Summary
1.9 Hamming Codes
1.9.1 Hard-Input Decoding Hamming Codes
1.9.2 Other Representations of the Hamming Code
An Algebraic Representation
A Polynomial Representation
A Trellis Representation
The Tanner Graph Representation
1.10 The Basic Questions
1.11 Historical Milestones of Coding Theory
1.12 A Bit of Information Theory
1.12.1 Definitions for Discrete Random Variables
Entropy and Conditional Entropy
Relative Entropy, Mutual Information, and Channel Capacity
1.12.2 Definitions for Continuous Random Variables
1.12.3 The Channel Coding Theorem
1.12.4 "Proof" of the Channel Coding Theorem
1.12.5 Capacity for the Continuous-Time AWGN Channel
1.12.6 Transmission at Capacity with Errors
1.12.7 The Implication of the Channel Coding Theorem
Lab 1 Simulating a Communications Channel
Objective
Background
Use of Coding in Conjunction with the BSC
Assignment
Programming Part
Resources and Implementation Suggestions
1.13 Exercises
1.14 References
Part II Block Codes
2 Groups and Vector Spaces
2.1 Introduction
2.2 Groups
2.2.1 Subgroups
2.2.2 Cyclic Groups and the Order of an Element
2.2.3 Cosets
2.2.4 Lagrange's Theorem
2.2.5 Induced Operations; Isomorphism
2.2.6 Homomorphism
2.3 Fields: A Prelude
2.4 Review of Linear Algebra
2.5 Exercises
2.6 References
3 Linear Block Codes
3.1 Basic Definitions
3.2 The Generator Matrix Description of Linear Block Codes
3.2.1 Rudimentary Implementation
3.3 The Parity Check Matrix and Dual Codes
3.3.1 Some Simple Bounds on Block Codes
3.4 Error Detection and Correction over Hard-Input Channels
3.4.1 Error Detection
3.4.2 Error Correction: The Standard Array
3.5 Weight Distributions of Codes and Their Duals
3.6 Hamming Codes and Their Duals
3.7 Performance of Linear Codes
3.7.1 Error Detection Performance
3.7.2 Error Correction Performance
3.7.3 Performance for Soft-Decision Decoding
3.8 Erasure Decoding
3.8.1 Binary Erasure Decoding
3.9 Modifications to Linear Codes
3.10 Best Known Linear Block Codes
3.11 Exercises
3.12 References
4 Cyclic Codes, Rings, and Polynomials
4.1 Introduction
4.2 Basic Definitions
4.3 Rings
4.3.1 Rings of Polynomials
4.4 Quotient Rings
4.5 Ideals in Rings
4.6 Algebraic Description of Cyclic Codes
4.7 Nonsystematic Encoding and Parity Check
4.8 Systematic Encoding
4.9 Some Hardware Background
4.9.1 Computational Building Blocks
4.9.2 Sequences and Power Series
4.9.3 Polynomial Multiplication
Last-Element-First Processing
First-Element-First Processing
4.9.4 Polynomial Division
Last-Element-First Processing
4.9.5 Simultaneous Polynomial Division and Multiplication
First-Element-First Processing
4.10 Cyclic Encoding
4.11 Syndrome Decoding
4.12 Shortened Cyclic Codes
Method 1: Simulating the Extra Clock Shifts
Method 2: Changing the Error Pattern Detection Circuit
4.13 Binary CRC Codes
4.13.1 Byte-Oriented Encoding and Decoding Algorithms
4.13.2 CRC Protecting Data Files or Data Packets
Appendix 4.A Linear Feedback Shift Registers
Appendix 4.A.1 Basic Concepts
Appendix 4.A.2 Connection With Polynomial Division
Appendix 4.A.3 Some Algebraic Properties of Shift Sequences
Lab 2 Polynomial Division and Linear Feedback Shift Registers
Objective
Preliminary Exercises
Programming Part: BinLFSR
Resources and Implementation Suggestions
Programming Part: BinPolyDiv
Follow-On Ideas and Problems
Lab 3 CRC Encoding and Decoding
Objective
Preliminary
Programming Part
Resources and Implementation Suggestions
4.14 Exercises
4.15 References
5 Rudiments of Number Theory and Algebra
5.1 Motivation
5.2 Number Theoretic Preliminaries
5.2.1 Divisibility
5.2.2 The Euclidean Algorithm and Euclidean Domains
5.2.3 The Sugiyama Algorithm
5.2.4 Congruence
5.2.5 The φ Function
5.2.6 Some Cryptographic Payoff
Fermat's Little Theorem
RSA Encryption
5.3 The Chinese Remainder Theorem
5.3.1 The CRT and Interpolation
The Evaluation Homomorphism
The Interpolation Problem
5.4 Fields
5.4.1 An Examination of R and C
5.4.2 Galois Field Construction: An Example
5.4.3 Connection with Linear Feedback Shift Registers
5.5 Galois Fields: Mathematical Facts
5.6 Implementing Galois Field Arithmetic
5.6.1 Zech Logarithms
5.6.2 Hardware Implementations
5.7 Subfields of Galois Fields
5.8 Irreducible and Primitive Polynomials
5.9 Conjugate Elements and Minimal Polynomials
5.9.1 Minimal Polynomials
5.10 Factoring x^n - 1
5.11 Cyclotomic Cosets
Appendix 5.A How Many Irreducible Polynomials Are There?
Appendix 5.A.1 Solving for I_m Explicitly: The Moebius Function
Lab 4 Programming the Euclidean Algorithm
Objective
Preliminary Exercises
Background
Programming Part
Lab 5 Programming Galois Field Arithmetic
Objective
Preliminary Exercises
Programming Part
5.12 Exercises
5.13 References
6 BCH and Reed-Solomon Codes: Designer Cyclic Codes
6.1 BCH Codes
6.1.1 Designing BCH Codes
6.1.2 The BCH Bound
6.1.3 Weight Distributions for Some Binary BCH Codes
6.1.4 Asymptotic Results for BCH Codes
6.2 Reed-Solomon Codes
6.2.1 Reed-Solomon Construction 1
6.2.2 Reed-Solomon Construction 2
6.2.3 Encoding Reed-Solomon Codes
6.2.4 MDS Codes and Weight Distributions for RS Codes
6.3 Decoding BCH and RS Codes: The General Outline
6.3.1 Computation of the Syndrome
6.3.2 The Error Locator Polynomial
6.3.3 Chien Search
6.4 Finding the Error Locator Polynomial
6.4.1 Simplifications for Binary Codes and Peterson's Algorithm
6.4.2 Berlekamp-Massey Algorithm
6.4.3 Characterization of LFSR Length in Massey's Algorithm
6.4.4 Simplifications for Binary Codes
6.5 Non-Binary BCH and RS Decoding
6.5.1 Forney's Algorithm
6.6 Euclidean Algorithm for the Error Locator Polynomial
6.7 Erasure Decoding for Nonbinary BCH or RS Codes
6.8 Galois Field Fourier Transform Methods
6.8.1 Equivalence of the Two Reed-Solomon Code Constructions
6.8.2 Frequency-Domain Decoding
6.9 Variations and Extensions of Reed-Solomon Codes
6.9.1 Simple Modifications
6.9.2 Generalized Reed-Solomon Codes and Alternant Codes
6.9.3 Goppa Codes
6.9.4 Decoding Alternant Codes
6.9.5 The McEliece Public Key Cryptosystem
Lab 6 Programming the Berlekamp-Massey Algorithm
Background
Assignment
Preliminary Exercises
Programming Part
Resources and Implementation Suggestions
Lab 7 Programming the BCH Decoder
Objective
Preliminary Exercises
Programming Part
Resources and Implementation Suggestions
Follow-On Ideas and Problems
Lab 8 Reed-Solomon Encoding and Decoding
Objective
Background
Programming Part
Appendix 6.A Proof of Newton's Identities
6.10 Exercises
6.11 References
7 Alternate Decoding Algorithms for Reed-Solomon Codes
7.1 Introduction: Workload for Reed-Solomon Decoding
7.2 Derivations of Welch-Berlekamp Key Equation
7.2.1 The Welch-Berlekamp Derivation of the WB Key Equation
7.2.2 Derivation From the Conventional Key Equation
7.3 Finding the Error Values
7.4 Methods of Solving the WB Key Equation
7.4.1 Background: Modules
7.4.2 The Welch-Berlekamp Algorithm
7.4.3 Modular Solution of the WB Key Equation
7.5 Erasure Decoding with the Welch-Berlekamp Key Equation
7.6 The Guruswami-Sudan Decoding Algorithm and Soft RS Decoding
7.6.1 Bounded Distance, ML, and List Decoding
7.6.2 Error Correction by Interpolation
7.6.3 Polynomials in Two Variables
Degree and Monomial Order
Zeros and Multiple Zeros
7.6.4 The GS Decoder: The Main Theorems
The Interpolation Theorem
The Factorization Theorem
The Correction Distance
The Number of Polynomials in the Decoding List
7.6.5 Algorithms for Computing the Interpolation Step
Finding Linearly Dependent Columns: The Feng-Tzeng Algorithm
Finding the Intersection of Kernels: The Kötter Algorithm
7.6.6 A Special Case: m = 1 and L = 1
7.6.7 The Roth-Ruckenstein Algorithm
What to Do with Lists of Factors?
7.6.8 Soft-Decision Decoding of Reed-Solomon Codes
Notation
A Factorization Theorem
Mapping from Reliability to Multiplicity
The Geometry of the Decoding Regions
Computing the Reliability Matrix
7.7 Exercises
7.8 References
8 Other Important Block Codes
8.1 Introduction
8.2 Hadamard Matrices, Codes, and Transforms
8.2.1 Introduction to Hadamard Matrices
8.2.2 The Paley Construction of Hadamard Matrices
8.2.3 Hadamard Codes
8.3 Reed-Muller Codes
8.3.1 Boolean Functions
8.3.2 Definition of the Reed-Muller Codes
8.3.3 Encoding and Decoding Algorithms for First-Order RM Codes
Encoding RM (1, m) Codes
Decoding RM (1, m) Codes
Expediting Decoding Using the Fast Hadamard Transform
8.3.4 The Reed Decoding Algorithm for RM (r, m) Codes, r ≥ 1
Details for an RM (2, 4) Code
A Geometric Viewpoint
8.3.5 Other Constructions of Reed-Muller Codes
8.4 Building Long Codes from Short Codes: The Squaring Construction
8.5 Quadratic Residue Codes
8.6 Golay Codes
8.6.1 Decoding the Golay Code
Algebraic Decoding of the G23 Golay Code
Arithmetic Decoding of the G24 Code
8.7 Exercises
8.8 References
9 Bounds on Codes
9.1 The Gilbert-Varshamov Bound
9.2 The Plotkin Bound
9.3 The Griesmer Bound
9.4 The Linear Programming and Related Bounds
9.4.1 Krawtchouk Polynomials
9.4.2 Character
9.4.3 Krawtchouk Polynomials and Characters
9.5 The McEliece-Rodemich-Rumsey-Welch Bound
9.6 Exercises
9.7 References
10 Bursty Channels, Interleavers, and Concatenation
10.1 Introduction to Bursty Channels
10.2 Interleavers
10.3 An Application of Interleaved RS Codes: Compact Discs
10.4 Product Codes
10.5 Reed-Solomon Codes
10.6 Concatenated Codes
10.7 Fire Codes
10.7.1 Fire Code Definition
10.7.2 Decoding Fire Codes: Error Trapping Decoding
10.8 Exercises
10.9 References
11 Soft-Decision Decoding Algorithms
11.1 Introduction and General Notation
11.2 Generalized Minimum Distance Decoding
11.2.1 Distance Measures and Properties
11.3 The Chase Decoding Algorithms
11.4 Halting the Search: An Optimality Condition
11.5 Ordered Statistic Decoding
11.6 Exercises
11.7 References
Part III Codes on Graphs
12 Convolutional Codes
12.1 Introduction and Basic Notation
12.1.1 The State
12.2 Definition of Codes and Equivalent Codes
12.2.1 Catastrophic Encoders
12.2.2 Polynomial and Rational Encoders
12.2.3 Constraint Length and Minimal Encoders
12.2.4 Systematic Encoders
12.3 Decoding Convolutional Codes
12.3.1 Introduction and Notation
12.3.2 The Viterbi Algorithm
12.3.3 Some Implementation Issues
The Basic Operation: Add-Compare-Select
Decoding Streams of Data: Windows on the Trellis
Output Decisions
Hard and Soft Decoding; Quantization
Synchronization Issues
12.4 Some Performance Results
12.5 Error Analysis for Convolutional Codes
12.5.1 Enumerating Paths Through the Trellis
Enumerating on More Complicated Graphs: Mason's Rule
12.5.2 Characterizing the Node Error Probability Pe and the Bit Error Rate Pb
12.5.3 A Bound on Pd for Discrete Channels
Performance Bound on the BSC
12.5.4 A Bound on Pd for BPSK Signaling Over the AWGN Channel
12.5.5 Asymptotic Coding Gain
12.6 Tables of Good Codes
12.7 Puncturing
12.7.1 Puncturing to Achieve Variable Rate
12.8 Suboptimal Decoding Algorithms for Convolutional Codes
12.8.1 Tree Representations
12.8.2 The Fano Metric
12.8.3 The Stack Algorithm
12.8.4 The Fano Algorithm
12.8.5 Other Issues for Sequential Decoding
12.8.6 A Variation on the Viterbi Algorithm: The M Algorithm
12.9 Convolutional Codes as Block Codes
12.10 Trellis Representations of Block and Cyclic Codes
12.10.1 Block Codes
12.10.2 Cyclic Codes
12.10.3 Trellis Decoding of Block Codes
Lab 9 Programming Convolutional Encoders
Objective
Background
Programming Part
Lab 10 Convolutional Decoders: The Viterbi Algorithm
Objective
Background
Programming Part
12.11 Exercises
12.12 References
13 Trellis Coded Modulation
13.1 Adding Redundancy by Adding Signals
13.2 Background on Signal Constellations
13.3 TCM Example
13.3.1 The General Ungerboeck Coding Framework
13.3.2 The Set Partitioning Idea
13.4 Some Error Analysis for TCM Codes
13.4.1 General Considerations
13.4.2 A Description of the Error Events
13.4.3 Known Good TCM Codes
13.5 Decoding TCM Codes
13.6 Rotational Invariance
Differential Encoding
Constellation Labels and Partitions
13.7 Multidimensional TCM
13.7.1 Some Advantages of Multidimensional TCM
13.7.2 Lattices and Sublattices
Basic Definitions
Common Lattices
Sublattices and Cosets
The Lattice Code Idea
Sources of Coding Gain in Lattice Codes
Some Good Lattice Codes
13.8 The V.34 Modem Standard
Lab 11 Trellis-Coded Modulation Encoding and Decoding
Objective
Background
Programming Part
13.9 Exercises
13.10 References
Part IV Iteratively Decoded Codes
14 Turbo Codes
14.1 Introduction
14.2 Encoding Parallel Concatenated Codes
14.3 Turbo Decoding Algorithms
14.3.1 The MAP Decoding Algorithm
14.3.2 Notation
14.3.3 Posterior Probability
14.3.4 Computing α_t and β_t
14.3.5 Computing γ_t
14.3.6 Normalization
14.3.7 Summary of the BCJR Algorithm
14.3.8 A Matrix/Vector Formulation
14.3.9 Comparison of the Viterbi and BCJR Algorithms
14.3.10 The BCJR Algorithm for Systematic Codes
14.3.11 Turbo Decoding Using the BCJR Algorithm
The Terminal State of the Encoders
14.3.12 Likelihood Ratio Decoding
Log Prior Ratio λ_{p,t}
Log Posterior λ_{s,t}
14.3.13 Statement of the Turbo Decoding Algorithm
14.3.14 Turbo Decoding Stopping Criteria
The Cross Entropy Stopping Criterion
The Sign Change Ratio (SCR) Criterion
The Hard Decision Aided (HDA) Criterion
14.3.15 Modifications of the MAP Algorithm
The Max-Log-MAP Algorithm
14.3.16 Corrections to the Max-Log-MAP Algorithm
14.3.17 The Soft Output Viterbi Algorithm
14.4 On the Error Floor and Weight Distributions
14.4.1 The Error Floor
14.4.2 Spectral Thinning and Random Interleavers
14.4.3 On Interleavers
14.5 EXIT Chart Analysis
14.5.1 The EXIT Chart
14.6 Block Turbo Coding
14.7 Turbo Equalization
14.7.1 Introduction to Turbo Equalization
14.7.2 The Framework for Turbo Equalization
Lab 12 Turbo Code Decoding
Objective
Background
Programming Part
14.8 Exercises
14.9 References
15 Low-Density Parity-Check Codes
15.1 Introduction
15.2 LDPC Codes: Construction and Notation
15.3 Tanner Graphs
15.4 Transmission Through a Gaussian Channel
15.5 Decoding LDPC Codes
15.5.1 The Vertical Step: Updating q_{mn}(x)
15.5.2 Horizontal Step: Updating r_{mn}(x)
15.5.3 Terminating and Initializing the Decoding Algorithm
15.5.4 Summary of the Algorithm
15.5.5 Message Passing Viewpoint
15.5.6 Likelihood Ratio Decoder Formulation
15.6 Why Low-Density Parity-Check Codes?
15.7 The Iterative Decoder on General Block Codes
15.8 Density Evolution
15.9 EXIT Charts for LDPC Codes
15.10 Irregular LDPC Codes
15.10.1 Degree Distribution Pairs
15.10.2 Some Good Codes
15.10.3 Density Evolution for Irregular Codes
15.10.4 Computation and Optimization of Density Evolution
15.10.5 Using Irregular Codes
15.11 More on LDPC Code Construction
15.11.1 A Construction Based on Finite Geometries
15.11.2 Constructions Based on Other Combinatoric Objects
15.12 Encoding LDPC Codes
15.13 A Variation: Low-Density Generator Matrix Codes
15.14 Serial Concatenated Codes; Repeat-Accumulate Codes
15.14.1 Irregular RA Codes
Lab 13 Programming an LDPC Decoder
Objective
Background
Assignment
Numerical Considerations
15.15 Exercises
15.16 References
16 Decoding Algorithms on Graphs
16.1 Introduction
16.2 Operations in Semirings
16.3 Functions on Local Domains
16.4 Factor Graphs and Marginalization
16.4.1 Marginalizing on a Single Variable
16.4.2 Marginalizing on All Individual Variables
16.5 Applications to Coding
16.5.1 Block Codes
16.5.2 Modifications to Message Passing for Binary Variables
16.5.3 Trellis Processing and the Forward/Backward Algorithm
16.5.4 Turbo Codes
16.6 Summary of Decoding Algorithms on Graphs
16.7 Transformations of Factor Graphs
16.7.1 Clustering
16.7.2 Stretching Variable Nodes
16.7.3 Exact Computation of Graphs with Cycles
16.8 Exercises
16.9 References
Part V Space-Time Coding
17 Fading Channels and Space-Time Codes
17.1 Introduction
17.2 Fading Channels
17.2.1 Rayleigh Fading
17.3 Diversity Transmission and Reception: The MIMO Channel
17.3.1 The Narrowband MIMO Channel
17.3.2 Diversity Performance with Maximal-Ratio Combining
17.4 Space-Time Block Codes
17.4.1 The Alamouti Code
17.4.2 A More General Formulation
17.4.3 Performance Calculation
Real Orthogonal Designs
Encoding and Decoding Based on Orthogonal Designs
Generalized Real Orthogonal Designs
17.4.4 Complex Orthogonal Designs
Future Work
17.5 Space-Time Trellis Codes
17.5.1 Concatenation
17.6 How Many Antennas?
17.7 Estimating Channel Information
17.8 Exercises
17.9 References
A Log Likelihood Algebra
A.1 Exercises
References
Index