Error Correction Coding: Mathematical Methods and Algorithms

✍ Scribed by Todd K. Moon


Publisher: Wiley
Year: 2021
Tongue: English
Leaves: 995
Edition: 2
Category: Library


✦ Synopsis


Providing in-depth treatment of error correction

Error Correction Coding: Mathematical Methods and Algorithms, 2nd Edition provides a comprehensive introduction to classical and modern methods of error correction. The presentation is clear and practical, taking a lab-oriented approach: readers are encouraged to implement the encoding and decoding algorithms, guided by explicit algorithm statements, and the mathematics used in error correction is balanced with algorithmic development showing how to actually do the encoding and decoding. Both block and stream (convolutional) codes are discussed, and the mathematics required to understand them is introduced on a “just-in-time” basis as the reader progresses through the book.
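
To give a feel for that lab-oriented style, here is a minimal C++ sketch, in the spirit of the Hamming-code material of Sections 1.9 and 1.9.1, of the kind of encoder/decoder readers are asked to build. It is illustrative only and is not taken from the book's companion program files.

    // Minimal (7,4) Hamming encode / hard-input decode sketch.
    // Illustrative only -- not from the book's companion files.
    // Codeword layout: [d0 d1 d2 d3 | p0 p1 p2].
    #include <array>
    #include <cstdio>

    using Word = std::array<int, 7>;

    Word encode(const std::array<int, 4>& d) {
        Word c{};
        for (int i = 0; i < 4; ++i) c[i] = d[i];
        c[4] = d[0] ^ d[1] ^ d[2];  // p0 covers d0, d1, d2
        c[5] = d[1] ^ d[2] ^ d[3];  // p1 covers d1, d2, d3
        c[6] = d[0] ^ d[1] ^ d[3];  // p2 covers d0, d1, d3
        return c;
    }

    // Recompute the three parity checks; the 3-bit syndrome singles out
    // the errored position (syndrome 0 means no error detected).
    Word decode(Word r) {
        int s = (r[0] ^ r[1] ^ r[2] ^ r[4])
              | (r[1] ^ r[2] ^ r[3] ^ r[5]) << 1
              | (r[0] ^ r[1] ^ r[3] ^ r[6]) << 2;
        static const int pos[8] = {-1, 4, 5, 2, 6, 0, 3, 1};
        if (s != 0) r[pos[s]] ^= 1;  // flip the single errored bit
        return r;
    }

    int main() {
        Word c = encode({1, 0, 1, 1});  // codeword 1011000
        c[2] ^= 1;                      // inject one channel error
        Word d = decode(c);
        for (int b : d) std::printf("%d", b);
        std::printf("\n");              // prints 1011000 again
    }

With one injected channel error, the syndrome lookup recovers the transmitted codeword; the book's programming laboratories grow this sort of exercise into full bit-error-rate simulations.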

The second edition increases the impact and reach of the book, updating it to discuss recent important technological advances. New material includes: 

  • Extensive coverage of LDPC codes, including a variety of decoding algorithms. 
  • A comprehensive introduction to polar codes, including systematic encoding/decoding and list decoding.   
  • An introduction to fountain codes. 
  • Modern applications to systems such as HDTV, DVB-T2, and cell phones.

Error Correction Coding includes extensive program files (for example, C++ code for all LDPC decoders and polar code decoders), laboratory materials for students to implement algorithms, and an updated solutions manual, all of which help the reader understand and retain the content.
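
For a flavor of the simplest of those decoders, here is a minimal hard-decision bit-flipping sketch in the spirit of the bit-flipping algorithms of Chapter 15. Again this is illustrative only, not the book's companion code, and the tiny (6,3) parity-check matrix is a made-up example, far denser than any real LDPC code:

    // Hard-decision bit flipping: recompute every parity check and flip
    // the bits involved in the largest number of failed checks.
    // Illustrative sketch only -- not from the book's companion files.
    #include <algorithm>
    #include <array>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::vector<std::array<int, 6>> H = {  // rows = checks
            {1, 1, 0, 1, 0, 0},
            {0, 1, 1, 0, 1, 0},
            {1, 0, 1, 0, 0, 1},
        };
        // Hard channel decisions: codeword 101110 with bit 1 flipped.
        std::array<int, 6> r = {1, 1, 1, 1, 1, 0};

        for (int iter = 0; iter < 10; ++iter) {
            std::array<int, 6> fails{};  // failed-check count per bit
            bool all_ok = true;
            for (const auto& row : H) {
                int parity = 0;
                for (int j = 0; j < 6; ++j) parity ^= row[j] & r[j];
                if (parity) {            // unsatisfied parity check
                    all_ok = false;
                    for (int j = 0; j < 6; ++j) fails[j] += row[j];
                }
            }
            if (all_ok) break;           // r is a valid codeword; stop
            int worst = *std::max_element(fails.begin(), fails.end());
            for (int j = 0; j < 6; ++j)
                if (fails[j] == worst) r[j] ^= 1;
        }
        for (int b : r) std::printf("%d", b);
        std::printf("\n");               // prints 101110
    }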

The book covers the classical BCH, Reed–Solomon, Golay, Reed–Muller, Hamming, and convolutional codes, which are still component codes in virtually every modern communication system. There are also thorough discussions of the recently developed polar codes and fountain codes, which bring the reader up to date on the newest developments in error correction.

✦ Table of Contents


Cover
Title Page
Copyright
Contents
Preface
List of Program Files
List of Laboratory Exercises
List of Algorithms
List of Figures
List of Tables
List of Boxes
About the Companion Website
Part I Introduction and Foundations
Chapter 1 A Context for Error Correction Coding
1.1 Purpose of This Book
1.2 Introduction: Where Are Codes?
1.3 The Communications System
1.4 Basic Digital Communications
1.4.1 Binary Phase‐Shift Keying
1.4.2 More General Digital Modulation
1.5 Signal Detection
1.5.1 The Gaussian Channel
1.5.2 MAP and ML Detection
1.5.3 Special Case: Binary Detection
1.5.4 Probability of Error for Binary Detection
1.5.5 Bounds on Performance: The Union Bound
1.5.6 The Binary Symmetric Channel
1.5.7 The BSC and the Gaussian Channel Model
1.6 Memoryless Channels
1.7 Simulation and Energy Considerations for Coded Signals
1.8 Some Important Definitions and a Trivial Code: Repetition Coding
1.8.1 Detection of Repetition Codes Over a BSC
1.8.2 Soft‐Decision Decoding of Repetition Codes Over the AWGN
1.8.3 Simulation of Results
1.8.4 Summary
1.9 Hamming Codes
1.9.1 Hard‐Input Decoding Hamming Codes
1.9.2 Other Representations of the Hamming Code
1.9.2.1 An Algebraic Representation
1.9.2.2 A Polynomial Representation
1.9.2.3 A Trellis Representation
1.9.2.4 The Tanner Graph Representation
1.10 The Basic Questions
1.11 Historical Milestones of Coding Theory
1.12 A Bit of Information Theory
1.12.1 Information‐Theoretic Definitions for Discrete Random Variables
1.12.1.1 Entropy and Conditional Entropy
1.12.1.2 Relative Entropy, Mutual Information, and Channel Capacity
1.12.2 Data Processing Inequality
1.12.3 Channels
1.12.3.1 Binary Symmetric Channel
1.12.3.2 Binary Erasure Channel
1.12.3.3 Noisy Typewriter
1.12.3.4 Symmetric Channels
1.12.4 Channel Capacity
1.12.5 Information‐Theoretic Definitions for Continuous Random Variables
1.12.6 The Channel Coding Theorem
1.12.7 “Proof” of the Channel Coding Theorem
1.12.8 Capacity for the Continuous‐Time AWGN Channel
1.12.9 Transmission at Capacity with Errors
1.12.10 The Implication of the Channel Coding Theorem
1.12.11 Non‐Asymptotic Information Theory
1.12.11.1 Discrete Channels
1.12.11.2 The AWGN Channel
1.12.11.3 Comparison of Codes
Programming Laboratory 1: Simulating a Communications Channel
Objective
Background
Use of Coding in Conjunction with the BSC
Assignment
Preliminary Exercises
Programming Part
BPSK Simulation
Resources and Implementation Suggestions
1.13 Exercises
1.14 References
Part II Block Codes
Chapter 2 Groups and Vector Spaces
2.1 Introduction
2.2 Groups
2.2.1 Subgroups
2.2.2 Cyclic Groups and the Order of an Element
2.2.3 Cosets
2.2.4 Lagrange's Theorem
2.2.5 Induced Operations; Isomorphism
2.2.6 Homomorphism
2.3 Fields: A Prelude
2.4 Review of Linear Algebra
2.5 Exercises
2.6 References
Chapter 3 Linear Block Codes
3.1 Basic Definitions
3.2 The Generator Matrix Description of Linear Block Codes
3.2.1 Rudimentary Implementation
3.3 The Parity Check Matrix and Dual Codes
3.3.1 Some Simple Bounds on Block Codes
3.4 Error Detection and Correction Over Hard‐Output Channels
3.4.1 Error Detection
3.4.2 Error Correction: The Standard Array
3.5 Weight Distributions of Codes and Their Duals
3.6 Binary Hamming Codes and Their Duals
3.7 Performance of Linear Codes
3.7.1 Error Detection Performance
3.7.2 Error Correction Performance
3.7.3 Performance for Soft‐Decision Decoding
3.8 Erasure Decoding
3.8.1 Binary Erasure Decoding
3.9 Modifications to Linear Codes
3.10 Best‐Known Linear Block Codes
3.11 Exercises
3.12 References
Chapter 4 Cyclic Codes, Rings, and Polynomials
4.1 Introduction
4.2 Basic Definitions
4.3 Rings
4.3.1 Rings of Polynomials
4.4 Quotient Rings
4.5 Ideals in Rings
4.6 Algebraic Description of Cyclic Codes
4.7 Nonsystematic Encoding and Parity Check
4.8 Systematic Encoding
4.9 Some Hardware Background
4.9.1 Computational Building Blocks
4.9.2 Sequences and Power Series
4.9.3 Polynomial Multiplication
4.9.3.1 Last‐Element‐First Processing
4.9.3.2 First‐Element‐First Processing
4.9.4 Polynomial Division
4.9.4.1 Last‐Element‐First Processing
4.9.5 Simultaneous Polynomial Division and Multiplication
4.9.5.1 First‐Element‐First Processing
4.10 Cyclic Encoding
4.11 Syndrome Decoding
4.12 Shortened Cyclic Codes
4.12.1 Method 1: Simulating the Extra Clock Shifts
4.12.2 Method 2: Changing the Error Pattern Detection Circuit
4.13 Binary CRC Codes
4.13.1 Byte‐Oriented Encoding and Decoding Algorithms
4.13.2 CRC Protecting Data Files or Data Packets
4.A Linear Feedback Shift Registers
4.A.1 Basic Concepts
4.A.2 Connection With Polynomial Division
4.A.3 Some Algebraic Properties of Shift Sequences
Programming Laboratory 2: Polynomial Division and Linear Feedback Shift Registers
Objective
Preliminary Exercises
Programming Part: BinLFSR
Resources and Implementation Suggestions
Programming Part: BinPolyDiv
Follow‐On Ideas and Problems
Programming Laboratory 3: CRC Encoding and Decoding
Objective
Preliminary
Programming Part
Resources and Implementation Suggestions
4.14 Exercises
4.15 References
Chapter 5 Rudiments of Number Theory and Algebra
5.1 Motivation
5.2 Number Theoretic Preliminaries
5.2.1 Divisibility
5.2.2 The Euclidean Algorithm and Euclidean Domains
5.2.3 An Application of the Euclidean Algorithm: The Sugiyama Algorithm
5.2.4 Congruence
5.2.5 The ϕ Function
5.2.6 Some Cryptographic Payoff
5.2.6.1 Fermat's Little Theorem
5.2.6.2 RSA Encryption
5.3 The Chinese Remainder Theorem
5.3.1 The CRT and Interpolation
5.3.1.1 The Evaluation Homomorphism
5.3.1.2 The Interpolation Problem
5.4 Fields
5.4.1 An Examination of R and C
5.4.2 Galois Field Construction: An Example
5.4.3 Connection with Linear Feedback Shift Registers
5.5 Galois Fields: Mathematical Facts
5.6 Implementing Galois Field Arithmetic
5.6.1 Zech Logarithms
5.6.2 Hardware Implementations
5.7 Subfields of Galois Fields
5.8 Irreducible and Primitive Polynomials
5.9 Conjugate Elements and Minimal Polynomials
5.9.1 Minimal Polynomials
5.10 Factoring xⁿ − 1
5.11 Cyclotomic Cosets
5.A How Many Irreducible Polynomials Are There?
5.A.1 Solving for Iₘ Explicitly: The Moebius Function
Programming Laboratory 4: Programming the Euclidean Algorithm
Objective
Preliminary Exercises
Background
Programming Part
Programming Laboratory 5: Programming Galois Field Arithmetic
Objective
Preliminary Exercises
Programming Part
5.12 Exercises
5.13 References
Chapter 6 BCH and Reed–Solomon Codes: Designer Cyclic Codes
6.1 BCH Codes
6.1.1 Designing BCH Codes
6.1.2 The BCH Bound
6.1.3 Weight Distributions for Some Binary BCH Codes
6.1.4 Asymptotic Results for BCH Codes
6.2 Reed–Solomon Codes
6.2.1 Reed–Solomon Construction 1
6.2.2 Reed–Solomon Construction 2
6.2.3 Encoding Reed–Solomon Codes
6.2.4 MDS Codes and Weight Distributions for RS Codes
6.3 Decoding BCH and RS Codes: The General Outline
6.3.1 Computation of the Syndrome
6.3.2 The Error Locator Polynomial
6.3.3 Chien Search
6.4 Finding the Error Locator Polynomial
6.4.1 Simplifications for Narrow‐Sense Binary Codes; Peterson's Algorithm
6.4.2 Berlekamp–Massey Algorithm
6.4.3 Characterization of LFSR Length in Massey's Algorithm
6.4.4 Simplifications for Binary Codes
6.5 Nonbinary BCH and RS Decoding
6.5.1 Forney's Algorithm
6.6 Euclidean Algorithm for the Error Locator Polynomial
6.7 Erasure Decoding for Nonbinary BCH or RS Codes
6.8 Galois Field Fourier Transform Methods
6.8.1 Equivalence of the Two Reed–Solomon Code Constructions
6.8.2 Frequency‐Domain Decoding
6.9 Variations and Extensions of Reed–Solomon Codes
6.9.1 Simple Modifications
6.9.2 Generalized Reed–Solomon Codes and Alternant Codes
6.9.3 Goppa Codes
6.9.4 Decoding Alternant Codes
6.9.5 Cryptographic Connections: The McEliece Public Key Cryptosystem
Programming Laboratory 6: Programming the Berlekamp–Massey Algorithm
Background
Assignment
Preliminary Exercises
Programming Part
Resources and Implementation Suggestions
Programming Laboratory 7: Programming the BCH Decoder
Objective
Preliminary Exercises
Programming Part
Resources and Implementation Suggestions
Follow‐On Ideas and Problems
Programming Laboratory 8: Reed–Solomon Encoding and Decoding
Objective
Background
Programming Part
6.A Proof of Newton's Identities
6.11 Exercises
6.12 References
Chapter 7 Alternate Decoding Algorithms for Reed–Solomon Codes
7.1 Introduction: Workload for Reed–Solomon Decoding
7.2 Derivations of Welch–Berlekamp Key Equation
7.2.1 The Welch–Berlekamp Derivation of the WB Key Equation
7.2.1.1 Single error in a message location
7.2.1.2 Multiple errors in message locations
7.2.1.3 Errors in check locations
7.2.2 Derivation from the Conventional Key Equation
7.3 Finding the Error Values
7.4 Methods of Solving the WB Key Equation
7.4.1 Background: Modules
7.4.2 The Welch–Berlekamp Algorithm
7.4.3 A Modular Approach to the Solution of the WB Key Equation
7.5 Erasure Decoding with the WB Key Equation
7.6 The Guruswami–Sudan Decoding Algorithm and Soft RS Decoding
7.6.1 Bounded Distance, ML, and List Decoding
7.6.2 Error Correction by Interpolation
7.6.3 Polynomials in Two Variables
7.6.3.1 Degree and Monomial Order
7.6.3.2 Zeros and Multiple Zeros
7.6.4 The GS Decoder: The Main Theorems
7.6.4.1 The Interpolation Theorem
7.6.4.2 The Factorization Theorem
7.6.4.3 The Correction Distance
7.6.4.4 The Number of Polynomials in the Decoding List
7.6.5 Algorithms for Computing the Interpolation Step
7.6.5.1 Finding Linearly Dependent Columns: The Feng–Tzeng Algorithm
7.6.5.2 Finding the Intersection of Kernels: The Kötter Algorithm
7.6.6 A Special Case: m=1 and L=1
7.6.7 An Algorithm for the Factorization Step: The Roth–Ruckenstein Algorithm
7.6.7.1 What to Do with Lists of Factors?
7.6.8 Soft‐Decision Decoding of Reed–Solomon Codes
7.6.8.1 Notation
7.6.8.2 A Factorization Theorem
7.6.8.3 Mapping from Reliability to Multiplicity
7.6.8.4 The Geometry of the Decoding Regions
7.6.8.5 Computing the Reliability Matrix
7.7 Exercises
7.8 References
Chapter 8 Other Important Block Codes
8.1 Introduction
8.2 Hadamard Matrices, Codes, and Transforms
8.2.1 Introduction to Hadamard Matrices
8.2.2 The Paley Construction of Hadamard Matrices
8.2.3 Hadamard Codes
8.3 Reed–Muller Codes
8.3.1 Boolean Functions
8.3.2 Definition of the Reed–Muller Codes
8.3.3 Encoding and Decoding Algorithms for First‐Order RM Codes
8.3.3.1 Encoding RM(1,m) Codes
8.3.3.2 Decoding RM(1,m) Codes
8.3.3.3 Expediting Decoding Using the Fast Hadamard Transform
8.3.4 The Reed Decoding Algorithm for RM(r,m) Codes, r≥1
8.3.4.1 Details for an RM(2,4) Code
8.3.4.2 A Geometric Viewpoint
8.3.5 Other Constructions of Reed–Muller Codes
8.4 Building Long Codes from Short Codes: The Squaring Construction
8.5 Quadratic Residue Codes
8.6 Golay Codes
8.6.1 Decoding the Golay Code
8.6.1.1 Algebraic Decoding of the 𝒢₂₃ Golay Code
8.6.1.2 Arithmetic Decoding of the 𝒢₂₄ Code
8.7 Exercises
8.8 References
Chapter 9 Bounds on Codes
9.1 The Gilbert–Varshamov Bound
9.2 The Plotkin Bound
9.3 The Griesmer Bound
9.4 The Linear Programming and Related Bounds
9.4.1 Krawtchouk Polynomials
9.4.2 Character
9.4.3 Krawtchouk Polynomials and Characters
9.5 The McEliece–Rodemich–Rumsey–Welch Bound
9.6 Exercises
9.7 References
Chapter 10 Bursty Channels, Interleavers, and Concatenation
10.1 Introduction to Bursty Channels
10.2 Interleavers
10.3 An Application of Interleaved RS Codes: Compact Discs
10.4 Product Codes
10.5 Reed–Solomon Codes
10.6 Concatenated Codes
10.7 Fire Codes
10.7.1 Fire Code Definition
10.7.2 Decoding Fire Codes: Error Trapping Decoding
10.8 Exercises
10.9 References
Chapter 11 Soft‐Decision Decoding Algorithms
11.1 Introduction and General Notation
11.2 Generalized Minimum Distance Decoding
11.2.1 Distance Measures and Properties
11.3 The Chase Decoding Algorithms
11.4 Halting the Search: An Optimality Condition
11.5 Ordered Statistic Decoding
11.6 Soft Decoding Using the Dual Code: The Hartmann Rudolph Algorithm
11.7 Exercises
11.8 References
Part III Codes on Graphs
Chapter 12 Convolutional Codes
12.1 Introduction and Basic Notation
12.1.1 The State
12.2 Definition of Codes and Equivalent Codes
12.2.1 Catastrophic Encoders
12.2.2 Polynomial and Rational Encoders
12.2.3 Constraint Length and Minimal Encoders
12.2.4 Systematic Encoders
12.3 Decoding Convolutional Codes
12.3.1 Introduction and Notation
12.3.2 The Viterbi Algorithm
12.3.3 Some Implementation Issues
12.3.3.1 The Basic Operation: Add‐Compare‐Select
12.3.3.2 Decoding Streams of Data: Windows on the Trellis
12.3.3.3 Output Decisions
12.3.3.4 Hard and Soft Decoding; Quantization
12.3.3.5 Synchronization Issues
12.4 Some Performance Results
12.5 Error Analysis for Convolutional Codes
12.5.1 Enumerating Paths Through the Trellis
12.5.1.1 Enumerating on More Complicated Graphs: Mason's Rule
12.5.2 Characterizing Node Error Probability Pe and Bit Error Rate Pb
12.5.3 A Bound on Pd for Discrete Channels
12.5.3.1 Performance Bound on the BSC
12.5.4 A Bound on Pd for BPSK Signaling Over the AWGN Channel
12.5.5 Asymptotic Coding Gain
12.6 Tables of Good Codes
12.7 Puncturing
12.7.1 Puncturing to Achieve Variable Rate
12.8 Suboptimal Decoding Algorithms for Convolutional Codes
12.8.1 Tree Representations
12.8.2 The Fano Metric
12.8.3 The Stack Algorithm
12.8.4 The Fano Algorithm
12.8.5 Other Issues for Sequential Decoding
12.8.5.1 Computational complexity
12.8.5.2 Code design
12.8.5.3 Variations on sequential decoding algorithms
12.8.6 A Variation on the Viterbi Algorithm: The M Algorithm
12.9 Convolutional Codes as Block Codes and Tailbiting Codes
12.10 A Modified Expression for the Path Metric
12.11 Soft Output Viterbi Algorithm (SOVA)
12.12 Trellis Representations of Block and Cyclic Codes
12.12.1 Block Codes
12.12.2 Cyclic Codes
12.12.3 Trellis Decoding of Block Codes
Programming Laboratory 9: Programming Convolutional Encoders
Objective
Background
Programming Part
Programming Laboratory 10: Convolutional Decoders: The Viterbi Algorithm
Objective
Background
Programming Part
12.13 Exercises
12.14 References
Chapter 13 Trellis‐Coded Modulation
13.1 Adding Redundancy by Adding Signals
13.2 Background on Signal Constellations
13.3 TCM Example
13.3.1 The General Ungerboeck Coding Framework
13.3.2 The Set Partitioning Idea
13.4 Some Error Analysis for TCM Codes
13.4.1 General Considerations
13.4.2 A Description of the Error Events
13.4.3 Known Good TCM Codes
13.5 Decoding TCM Codes
13.6 Rotational Invariance
13.6.1 Differential Encoding
13.6.2 Constellation Labels and Partitions
13.7 Multidimensional TCM
13.7.1 Some Advantages of Multidimensional TCM
13.7.1.1 Energy expansion advantage
13.7.1.2 Sphere‐packing advantages
13.7.1.3 Spectral efficiency
13.7.1.4 Rotational invariance
13.7.1.5 Signal shape
13.7.1.6 Peak‐to‐average power ratio
13.7.1.7 Decoding speed
13.7.2 Lattices and Sublattices
13.7.2.1 Basic Definitions
13.7.2.2 Common Lattices
13.7.2.3 Sublattices and Cosets
13.7.2.4 The Lattice Code Idea
13.7.2.5 Sources of Coding Gain in Lattice Codes
13.7.2.6 Some Good Lattice Codes
13.8 Multidimensional TCM Example: The V.34 Modem Standard
13.9 Exercises
Programming Laboratory 11: Trellis‐Coded Modulation Encoding and Decoding
13.10 References
Part IV Iteratively Decoded Codes
Chapter 14 Turbo Codes
14.1 Introduction
14.2 Encoding Parallel Concatenated Codes
14.3 Turbo Decoding Algorithms
14.3.1 The MAP Decoding Algorithm
14.3.2 Notation
14.3.3 Posterior Probability
14.3.4 Computing αt and βt
14.3.5 Computing γt
14.3.6 Normalization
14.3.7 Summary of the BCJR Algorithm
14.3.8 A Matrix/Vector Formulation
14.3.9 Comparison of the Viterbi Algorithm and the BCJR Algorithm
14.3.10 The BCJR Algorithm for Systematic Codes
14.3.11 Turbo Decoding Using the BCJR Algorithm
14.3.11.1 The Terminal State of the Encoders
14.3.12 Likelihood Ratio Decoding
14.3.12.1 Log Prior Ratio λp,t
14.3.12.2 Log Posterior λs,t(0)
14.3.13 Statement of the Turbo Decoding Algorithm
14.3.14 Turbo Decoding Stopping Criteria
14.3.14.1 The Cross Entropy Stopping Criterion
14.3.14.2 The Sign Change Ratio (SCR) Criterion
14.3.14.3 The Hard Decision Aided (HDA) Criterion
14.3.15 Modifications of the MAP Algorithm
14.3.15.1 The Max‐Log‐MAP Algorithm
14.3.16 Corrections to the Max‐Log‐MAP Algorithm
14.3.17 The Soft‐Output Viterbi Algorithm
14.4 On the Error Floor and Weight Distributions
14.4.1 The Error Floor
14.4.2 Spectral Thinning and Random Permuters
14.4.3 On Permuters
14.5 EXIT Chart Analysis
14.5.1 The EXIT Chart
14.6 Block Turbo Coding
14.7 Turbo Equalization
14.7.1 Introduction to Turbo Equalization
14.7.2 The Framework for Turbo Equalization
Programming Laboratory 12: Turbo Code Decoding
Objective
Background
Programming Part
14.8 Exercises
14.9 References
Chapter 15 Low‐Density Parity‐Check Codes: Introduction, Decoding, and Analysis
15.1 Introduction
15.2 LDPC Codes: Construction and Notation
15.3 Tanner Graphs
15.4 Decoding LDPC Codes
15.4.1 Decoding Using Log‐Likelihood Ratios
15.4.1.1 Log‐Likelihood Ratios
15.4.1.2 Log‐Likelihood Ratio of the Sum of Bits
15.4.1.3 Decoding: Message from a Check Node to a Variable Node
15.4.1.4 Log Likelihood of Repeated Observations About a Bit
15.4.1.5 Decoding: Message from a Variable Node to a Check Node
15.4.1.6 Inputs to the LDPC Decoding Algorithm
15.4.1.7 Terminating the Decoding Algorithm
15.4.1.8 Summary of Decoding: The Belief Propagation Algorithm
15.4.1.9 Messages on the Tanner Graph
15.4.2 Decoding Using Probabilities
15.4.2.1 Probability of Even Parity: Decoding at the Check Nodes
15.4.2.2 Probability of Independent Observations Decoding at a Variable Node
15.4.2.3 Computing the Leave‐One‐Out Product
15.4.2.4 Sparse Memory Organization
15.4.3 Variations on Decoding Algorithms: The Min‐Sum Decoder
15.4.3.1 The ⊞ Operation and the ϕ(x) Function
15.4.3.2 Attenuated and Offset Min‐Sum Decoders
15.4.4 Variations on Decoding Algorithms: Min‐Sum with Correction
15.4.4.1 Approximate min Decoder
15.4.4.2 The Reduced Complexity Box‐Plus Decoder
15.4.4.3 The Richardson–Novichkov Decoder
15.4.5 Hard‐Decision Decoding
15.4.5.1 Bit Flipping
15.4.5.2 Gallager's Algorithms A and B
15.4.5.3 Weighted Bit Flipping
15.4.5.4 Gradient Descent Bit Flipping
15.4.6 Divide and Concur Decoding
15.4.6.1 Summary of the Divide and Concur Algorithm
15.4.6.2 DC Applied to LDPC Decoding
15.4.6.3 The Divide Projections
15.4.6.4 The Concur Projection
15.4.6.5 A Message‐Passing Viewpoint of DC Decoding
15.4.7 Difference Map Belief Propagation Decoding
15.4.8 Linear Programming Decoding
15.4.8.1 Background on Linear Programming
15.4.8.2 Formulation of the Basic LP Decoding Algorithm
15.4.8.3 LP Relaxation
15.4.8.4 Examples and Discussion
15.4.9 Decoding on the Binary Erasure Channel
15.4.10 BEC Channels and Stopping Sets
15.5 Why Low‐Density Parity‐Check Codes?
15.6 The Iterative Decoder on General Block Codes
15.7 Density Evolution
15.8 EXIT Charts for LDPC Codes
15.9 Irregular LDPC Codes
15.9.1 Degree Distribution Pairs
15.9.2 Density Evolution for Irregular Codes
15.9.3 Computation and Optimization of Density Evolution
15.9.4 Using Irregular Codes
15.10 More on LDPC Code Construction
15.11 Encoding LDPC Codes
15.12 A Variation: Low‐Density Generator Matrix Codes
Programming Laboratory 13: Programming an LDPC Decoder
15.13 Exercises
15.14 References
Chapter 16 Low‐Density Parity‐Check Codes: Designs and Variations
16.1 Introduction
16.2 Repeat‐Accumulate Codes
16.2.1 Decoding RA Codes
16.2.2 Irregular RA Codes
16.2.3 RA Codes with Multiple Accumulators
16.3 LDPC Convolutional Codes
16.4 Quasi‐Cyclic Codes
16.4.1 QC Generator Matrices
16.4.2 Constructing QC Generator Matrices from QC Parity Check Matrices
16.4.2.1 Full Rank Case
16.4.2.2 Non‐Full Rank Case
16.5 Construction of LDPC Codes Based on Finite Fields
16.5.1 I. Construction of QC‐LDPC Codes Based on the Minimum‐Weight Codewords of a Reed–Solomon Code with Two Information Symbols
16.5.2 II. Construction of QC‐LDPC Codes Based on a Special Subclass of RS Codes
16.5.3 III. Construction of QC‐LDPC Codes Based on Subgroups of a Finite Field
16.5.4 IV. Construction of QC‐LDPC Codes Based on Subgroups of the Multiplicative Group of a Finite Field
16.5.5 V. Construction of QC‐LDPC Codes Based on Primitive Elements of a Field
16.6 Code Design Based on Finite Geometries
16.6.1 Rudiments of Euclidean Geometry
16.6.1.1 Points in EG(m,q)
16.6.1.2 Lines in EG(m,q)
16.6.1.3 Incidence Vectors in EG(m,q)
16.6.2 A Family of Cyclic EG‐LDPC Codes
16.6.3 Construction of LDPC Codes Based on Parallel Bundles of Lines
16.6.4 Constructions Based on Other Combinatoric Objects
16.7 Ensembles of LDPC Codes
16.7.1 Regular Ensembles
16.7.2 Irregular Ensembles
16.7.3 Multi‐Edge‐Type Ensembles
16.8 Constructing LDPC Codes by Progressive Edge Growth (PEG)
16.9 Protograph and Multi‐Edge‐Type LDPC Codes
16.10 Error Floors and Trapping Sets
16.11 Importance Sampling
16.11.1 Importance Sampling: General Principles
16.11.2 Importance Sampling for Estimating Error Probability
Conventional Sampling (MC)
Importance Sampling (IS)
16.11.3 Importance Sampling for Tanner Trees
16.11.3.1 Single Parity‐Check Codes
16.11.3.2 Symmetric Tanner Trees
16.11.3.3 General Trees
16.11.3.4 Importance Sampling for LDPC Codes
16.12 Fountain Codes
16.12.1 Conventional Erasure Correction Codes
16.12.2 Tornado Codes
16.12.3 Luby Transform Codes
16.12.4 Raptor Codes
16.13 References
Part V Polar Codes
Chapter 17 Polar Codes
17.1 Introduction and Preview
17.2 Notation and Channels
17.3 Channel Polarization, N=2 Channel
17.3.1 Encoding
17.3.2 Synthetic Channels and Mutual Information
17.3.3 Synthetic Channel Transition Probabilities
17.3.4 An Example with N=2 Using the Binary Erasure Channel
17.3.5 Successive Cancellation Decoding
17.3.5.1 Log‐Likelihood Ratio Computations
17.3.5.2 Computing the Log‐Likelihood Function with Lower Complexity
17.3.6 Tree Representation
17.3.7 The Polar Coding Idea
17.4 Channel Polarization, N>2 Channels
17.4.1 Channel Combining: Extension from N/2 to N Channels
17.4.2 Pseudocode for Encoder for Arbitrary N
17.4.3 Transition Probabilities and Channel Splitting
17.4.4 Channel Polarization Demonstrated: An Example Using the Binary Erasure Channel for N>2
17.4.5 Polar Coding
17.4.6 Tree Representation
17.4.7 Successive Cancellation Decoding
17.4.8 SC Decoding from a Message Passing Point of View on the Tree
17.4.9 A Decoding Example with N=4
17.4.10 A Decoding Example with N=8
17.4.11 Pseudo‐Code Description of Successive Cancellation Algorithm
17.5 Some Theorems of Polar Coding Theory
17.5.1 I(W) and Z(W) for general B‐DMCs
17.5.2 Channel Polarization
17.5.3 The Polarization Theorem
17.5.4 A Few Facts About Martingales
17.5.5 Proof of the Polarization Theorem
17.5.6 Another Polarization Theorem
17.5.7 Rate of Polarization
17.5.8 Probability of Error Performance
17.6 Designing Polar Codes
17.6.1 Code Design by Bhattacharyya Bound
17.6.2 Monte Carlo Estimation of Z(W_N^(i))
17.7 Perspective: The Channel Coding Theorem
17.8 Systematic Encoding of Polar Codes
17.8.1 Polar Systematic Encoding via the Encoder Graph
17.8.2 Polar Systematic Encoding via Arıkan's Method
17.8.3 Systematic Encoding: The Bit Reverse Permutation
17.8.4 Decoding Systematically Encoded Codes
17.8.5 Flexible Systematic Encoding
17.8.6 Involutions and Domination Contiguity
17.8.7 Polar Codes and Domination Contiguity
17.8.8 Modifications for Polar Codes with Bit‐Reverse Permutation
17.9 List Decoding of Polar Codes
17.9.1 The Likelihood Data Structure P
17.9.2 Normalization
17.9.3 Code to Compute P
17.9.4 The Bits Data Structure C
17.9.5 Code to Compute C
17.9.6 Supporting Data Structures
17.9.7 Code for Polar List Decoding
17.9.8 An Example of List Decoding
17.9.9 Computational Complexity
17.9.10 Modified Polar Codes
17.10 LLR‐Based Successive Cancellation List Decoding
17.10.1 Implementation Considerations
17.11 Simplified Successive Cancellation Decoding
17.11.1 Modified SC Decoding
17.12 Relations with Reed–Muller Codes
17.13 Hardware and Throughput Performance Results
17.14 Generalizations, Extensions, and Variations
17.A B_N is a Bit‐Reverse Permutation
17.B The Relationship of the Bhattacharyya Parameter to Channel Capacity
17.B.1 Error Probability for Two Codewords
17.B.2 Proof of Inequality (17.59)
17.B.3 Proof of Inequality (17.60) [16]
17.C Proof of Theorem 17.12
17.15 Exercises
17.16 References
Part VI Applications
Chapter 18 Some Applications of Error Correction in Modern Communication Systems
18.1 Introduction
18.2 Digital Video Broadcast T2 (DVB‐T2)
18.2.1 BCH Outer Encoding
18.2.2 LDPC Inner Encoding
18.3 Digital Cable Television
18.4 E‐UTRA and Long‐Term Evolution
18.4.1 LTE Rate 1/3 Convolutional Encoder
18.4.2 LTE Turbo Code
18.5 References
Part VII Space-Time Coding
Chapter 19 Fading Channels and Space‐Time Codes
19.1 Introduction
19.2 Fading Channels
19.2.1 Rayleigh Fading
19.3 Diversity Transmission and Reception: The MIMO Channel
19.3.1 The Narrowband MIMO Channel
19.3.2 Diversity Performance with Maximal‐Ratio Combining
19.4 Space‐Time Block Codes
19.4.1 The Alamouti Code
19.4.2 A More General Formulation
19.4.3 Performance Calculation
19.4.3.1 Real Orthogonal Designs
19.4.3.2 Encoding and Decoding Based on Orthogonal Designs
19.4.3.3 Generalized Real Orthogonal Designs
19.4.4 Complex Orthogonal Designs
19.4.4.1 Future Work
19.5 Space‐Time Trellis Codes
19.5.1 Concatenation
19.6 How Many Antennas?
19.7 Estimating Channel Information
19.8 Exercises
19.9 References
References
Index
EULA


📜 SIMILAR VOLUMES


Error Correction Coding: Mathematical Methods and Algorithms
✍ Todd K. Moon 📂 Library 📅 2020 🏛 Wiley 🌐 English


Error Correction Coding: Mathematical Methods and Algorithms
✍ Todd K. Moon 📂 Library 📅 2021 🏛 Wiley 🌐 English


Error Correction Coding: Mathematical Methods and Algorithms
✍ Todd K. Moon 📂 Library 📅 2005 🏛 Wiley-Interscience 🌐 English


Error Correction Coding: Mathematical Methods and Algorithms
✍ Todd K. Moon 📂 Library 📅 2005 🏛 Wiley-Interscience 🌐 English


Error correction coding: mathematical methods and algorithms
✍ Todd K. Moon 📂 Library 📅 2005 🏛 Wiley-Interscience 🌐 English
