Codes and Turbo Codes (Collection IRIS)
Edited by Claude Berrou
- Publisher: Springer
- Year: 2010
- Language: English
- Pages: 424
- Category: Library
Synopsis
This book is devoted to one of the essential functions of modern telecommunications systems: channel coding, also known as error correction coding. Its main topics are iteratively decoded algebraic codes, convolutional codes and concatenated codes. It also presents digital modulation, with which channel coding is closely associated to make up the heart of the physical layer of telecommunications systems. The most important theoretical aspects are given, and the construction of codes is detailed and justified. Decoding algorithms are developed and, whenever possible, accompanied by simulation results characteristic of their correcting power. The authors are researchers and lecturers recognised for their expertise in the field of encoding and decoding algorithms and associated circuits. Codes and Turbo Codes is intended as a means for discovering the domain, as a valuable source of information about the many techniques imagined since the mid-twentieth century, and as a step towards addressing problems not yet entirely solved.
Table of Contents
Title Page
Copyright Page
Codes and Turbo Codes
Foreword
Table of Contents
Chapter 1 Introduction
1.1 Digital messages
1.2 A first code
1.3 Hard input decoding and soft input decoding
1.4 Hard output decoding and soft output decoding
1.5 The performance measure
1.6 What is a good code?
1.7 Families of codes
Chapter 2 Digital communications
2.1 Digital Modulations
2.1.1 Introduction
2.1.2 Linear Memoryless Modulations
Amplitude-shift keying with M states: M-ASK
Phase Shift Keying with M states (M-PSK)
Quadrature Amplitude Modulation using two quadrature carriers (M-QAM)
2.1.3 Memoryless modulation with M states (M-FSK)
2.1.4 Modulations with memory by continuous phase frequency shift keying (CPFSK)
Continuous phase-frequency-shift keying with modulation index h = 1/2: Minimum Shift Keying (MSK)
L-ary Raised Cosine modulation (L-RC)
Gaussian minimum shift keying modulation (GMSK)
2.2 Structure and performance of the optimal receiver on a Gaussian channel
2.2.1 Structure of the coherent receiver
2.2.2 Performance of the coherent receiver
Amplitude shift keying with M states
Phase shift keying with M states
Amplitude modulation on two quadrature carriers – M-QAM
Frequency shift keying – M-FSK
Continuous phase frequency shift keying – CPFSK
2.3 Transmission on a band-limited channel
2.3.1 Introduction
2.3.2 Intersymbol interference
2.3.3 Condition of absence of ISI: Nyquist criterion
Optimal distribution of filtering between transmission and reception
2.3.4 Expression of the error probability in presence of Nyquist filtering
2.4 Transmission on fading channels
2.4.1 Characterization of a fading channel
Coherence bandwidth
Coherence time
2.4.2 Transmission on non-frequency-selective slow-fading channels
Performance on a Rayleigh channel
Performance on a Rayleigh channel with diversity
Transmission on a slow-fading frequency-selective channel
Transmission with equalization at reception
Chapter 3 Theoretical limits
3.1 Information theory
3.1.1 Transmission channel
3.1.2 An example: the binary symmetric channel
Configurations of errors on the binary symmetric channel
Mutual information and capacity of the binary symmetric channel
3.1.3 Overview of the fundamental coding theorem
3.1.4 Geometrical interpretation
3.1.5 Random coding
Codes imitating random coding
3.2 Theoretical limits to performance
3.2.1 Binary input and real output channel
3.2.2 Capacity of a transmission channel
Shannon limit of a band-limited continuous input and output Gaussian channel
Capacity of a discrete input Gaussian channel
Capacity of the Rayleigh channel
3.3 Practical limits to performance
3.3.1 Gaussian binary input channel
3.3.2 Gaussian continuous input channel
3.3.3 Some examples of limits
3.4 Minimum distances required
3.4.1 MHD required with 4-PSK modulation
3.4.2 MHD required with 8-PSK modulation
3.4.3 MHD required with 16-QAM modulation
Bibliography
Chapter 4 Block codes
4.1 Block codes with binary symbols
4.1.1 Generator matrix of a binary block code
4.1.2 Dual code and parity check matrix
4.1.3 Minimum distance
4.1.4 Extended codes and shortened codes
4.1.5 Product codes
4.1.6 Examples of binary block codes
Parity check code
Repetition code
Hamming code
Maximum length code
Hadamard code
Reed-Muller codes
4.1.7 Cyclic codes
Definition and polynomial representation
Cyclic code in systematic form
Implementation of an encoder
BCH codes
4.2 Block codes with non-binary symbols
4.2.1 Reed-Solomon codes
4.2.2 Implementing the encoder
4.3 Decoding and performance of codes with binary symbols
4.3.1 Error detection
Detection capability
Probability of non-detection of errors
4.3.2 Error correction
Hard decoding
Soft decoding
4.4 Decoding and performance of codes with non-binary symbols
4.4.1 Hard input decoding of Reed-Solomon codes
4.4.2 Peterson's direct method
Description of the algorithm for codes with non-binary symbols
Simplification of Peterson's algorithm for binary codes
Chien algorithm
4.4.3 Iterative method
Berlekamp-Massey algorithm for codes with non-binary symbols
Euclid's algorithm
Calculating coefficients ej by a transform
Berlekamp-Massey algorithm for binary cyclic codes
Euclid's algorithm for binary codes
4.4.4 Hard input decoding performance of Reed-Solomon codes
Bibliography
Appendix: Notions about Galois fields and minimal polynomials
Definition
Primitive element of a Galois field
Minimal polynomial with coefficients in F2 associated with an element of a Galois field Fq
Minimal polynomial with coefficients in Fq associated with an element in a Galois field Fq
Primitive polynomials
Chapter 5 Convolutional codes and their decoding
5.1 History
5.2 Representations of convolutional codes
5.2.1 Generic representation of a convolutional encoder
5.2.2 Polynomial representation
5.2.3 Tree of a code
5.2.4 Trellis of a code
5.2.5 State machine of a code
5.3 Code distances and performance
5.3.1 Choosing a good code
5.3.2 RTZ sequences
5.3.3 Transfer function and distance spectrum
5.3.4 Performance
5.4 Decoding convolutional codes
5.4.1 Model of the transmission chain and notations
5.4.2 The Viterbi algorithm
Example of applying the Viterbi algorithm
5.4.3 The Maximum A Posteriori algorithm or MAP algorithm
5.5 Convolutional block codes
5.5.1 Trellis termination
Classical trellis termination
Tail-biting
5.5.2 Puncturing
Bibliography
Chapter 6 Concatenated codes
6.1 Parallel concatenation and serial concatenation
6.2 Parallel concatenation and LDPC codes
6.3 Permutations
6.4 Turbo crossword
Bibliography
Chapter 7 Convolutional turbo codes
7.1 The history of turbo codes
7.2 Multiple concatenation of RSC codes
7.3 Turbo codes
7.3.1 Termination of constituent codes
7.3.2 The permutation function
Regular permutation
Necessity for disorder
Intra-symbol disorder
Irregular permutations
Quadratic Permutation
7.4 Decoding turbo codes
7.4.1 Turbo decoding
7.4.2 SISO decoding and extrinsic information
Notations
Decoding following the Maximum A Posteriori (MAP) criterion
The simplified Max-Log-MAP or SubMAP algorithm
7.4.3 Practical considerations
7.5 m-binary turbo codes
7.5.1 m-binary RSC encoders
7.5.2 m-binary turbo codes
8-state double-binary turbo code
16-state double-binary turbo code
7.6 Analysis tools
7.6.1 Theoretical performance
7.6.2 Asymptotic behaviour
Error impulse method
Modified error impulse method
Double error impulse method
7.6.3 Convergence
Transfer function for a SISO decoder of extrinsic information
a. Definition of the mutual information (MI)
b. Definition of the a priori mutual information
c. Calculation of the output mutual information
d. Practical method to obtain the transfer function of the extrinsic information
e. An example
EXIT chart
Bibliography
Chapter 8 Turbo product codes
8.1 History
8.2 Product codes
Definition
Example 8.1
8.3 Hard input decoding of product codes
8.3.1 Row-column decoding
Property
Example 8.2
8.3.2 The Reddy-Robinson algorithm
Example 8.3
8.4 Soft input decoding of product codes
8.4.1 The Chase algorithm with weighted input
Example 8.4
8.4.2 Performance of the Chase-Pyndiah algorithm
8.4.3 The Fang-Battail algorithm
Example 8.5
8.4.4 The Hartmann-Nazarov algorithm
Fast Hadamard transform
Example 8.6
8.4.5 Other soft input decoding algorithms
8.5 Implementation of the Chase-Pyndiah algorithm
Bibliography
Chapter 9 LDPC codes
9.1 Principle of LDPC codes
9.1.1 Parity check code
Definition
Parity code with three bits
Parity check code with n bits
9.1.2 Definition of an LDPC code
Linear block codes
Low density parity check codes
Coding rate
9.1.3 Encoding
Generic encoding
Specific constructions
Summary
Analogy between an LDPC code and a turbo code
9.1.4 Decoding LDPC codes
Hard input algorithm
Belief propagation algorithm
9.1.5 Random construction of LDPC codes
Optimization of irregularity profiles
Optimization of cycle size
Selecting the code by the impulse method
Selecting the code by simulation
9.1.6 Some geometrical constructions of LDPC codes
Cayley / Ramanujan constructions
Kou, Lin and Fossorier's Euclidean / Projective Geometry LDPC
Constructions based on permutation matrices
Matrices based on pseudo-random generators
Array-based LDPC
BIBDs, Latin rectangles
9.2 Architecture for decoding LDPC codes for the Gaussian channel
9.2.1 Analysis of the complexity
9.2.2 Architecture of a generic node processor (GNP)
Choice of a generic operator
9.2.3 Generic architecture for message propagation
Presentation of the model
Example of an implementation
9.2.4 Combining parameters of the architecture
9.2.5 Example of synthesis of an LDPC decoder architecture
Flooding schedule (according to parities)
Horizontal interleaving schedule
9.2.6 Sub-optimal decoding algorithm
Single message decoding algorithm (λ = 1)
Sub-optimal PNP algorithms (λ > 1)
9.2.7 Influence of quantization
9.2.8 State of the art of published LDPC decoder architectures
Bibliography
Chapter 10 Turbo codes and large spectral efficiency transmissions
10.1 Turbo trellis coded modulation (TTCM)
10.2 Pragmatic turbo coded modulation
Bibliography
Chapter 11 The turbo principle applied to equalization and detection
11.1 Turbo equalization
11.1.1 Multipath channels and intersymbol interference
11.1.2 The equalization function
11.1.3 Combining equalization and decoding
11.1.4 Principle of turbo equalization
11.1.5 MAP turbo equalization
Implementation of the BCJR-MAP equalizer
Example of performance
Complexity of the MAP turbo equalizer and alternative solutions
Architectures and applications
11.1.6 MMSE turbo equalization
Principle of soft-input soft-output linear MMSE equalization
Adaptive implementation of the equalizer
Examples of performance
Example of implementation and applications
11.2 Multi-user turbo detection and its application to CDMA systems
11.2.1 Introduction and some notations
11.2.2 Multi-user detection
Standard receiver
Optimal joint detection
Decorrelator detector
Linear MMSE detector
Iterative detector
11.2.3 Turbo CDMA
Turbo SIC detector
Some simulations
Turbo SIC/RAKE detector
11.3 Conclusions
Bibliography
Index