Guessing random additive noise decoding (GRAND) in concatenated schemes: An investigation of complexity and BER behavior
(2024) EITM02 20241, Department of Electrical and Information Technology
- Abstract
- Recently, a newly developed family of decoders named Guessing Random Additive Noise Decoding (GRAND) has been gaining attention. The key idea of the decoder is to search for the most probable error vectors that, combined with the transmitted message, produce the received block. Among the different variations of the algorithm, the soft-input soft-output (SISO) decoder, known as SO-GRAND, emerges as a good candidate for use in iterative schemes by employing list decoding. In this work, SO-GRAND's complexity and performance are analyzed against a trellis-based soft-output decoder. Both decoders are used as concatenated components of a generalized LDPC code. Results show that, while the soft-input hard-output version of GRAND (known as ORBGRAND) demonstrates good complexity properties, the same is not true for SO-GRAND, which, in general, does not match the computation count of the trellis algorithm for typical list sizes. Nevertheless, SO-GRAND has the appealing property of trading complexity against performance, an approach that may be of interest when performance is already good enough. Better performance of SO-GRAND is expected for higher-order approximations in the ORBGRAND algorithm. The study also features several simulations with different types of codes (CRC, Hamming, and random), demonstrating the universality of both the trellis-based and SO-GRAND algorithms.
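To make the guessing idea concrete, the following is a minimal sketch (not taken from the thesis) of the basic GRAND principle for a binary linear code with parity-check matrix H. Here the noise guesses are ordered by Hamming weight, which corresponds to hard-input GRAND; ORBGRAND and SO-GRAND instead order and weight the guesses using soft reliability information. The function name and the abandonment threshold are illustrative.

```python
# Minimal sketch of the GRAND principle, assuming a binary linear code given by
# a parity-check matrix H (numpy array over {0,1}). Guesses are ordered by
# Hamming weight, i.e. the hard-input ordering; soft-input variants reorder the
# guesses by their estimated likelihood instead.
import itertools
import numpy as np

def grand_decode(y_hard, H, max_queries=100_000):
    """Guess noise patterns in decreasing likelihood order (here: increasing
    Hamming weight) and return the first codeword found, i.e. y_hard + e mod 2."""
    n = H.shape[1]
    queries = 0
    for weight in range(n + 1):
        for positions in itertools.combinations(range(n), weight):
            if queries >= max_queries:
                return None, queries              # abandon: declare a failure
            queries += 1
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            candidate = (y_hard + e) % 2          # flip the guessed error bits
            if not ((H @ candidate) % 2).any():   # zero syndrome -> codeword found
                return candidate, queries
    return None, queries
```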
- Popular Abstract
- Error-correcting codes are part of standardized protocols found in everyday technologies, such as QR codes, the 5G standard, and compact discs. These codes function by adding redundancy to data, enabling the correction of errors at the receiver. In communication systems, various strategies are employed to infer the correct set of transmitted symbols (or bits). The optimal strategy is called the maximum likelihood (ML) decoder. Although it gives the best possible performance, the ML decoding algorithm is often not efficient in terms of computational complexity.
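As a rough illustration (not part of the original text), the brute-force nature of ML decoding can be sketched as follows: for a binary code with a k-by-n generator matrix, the decoder compares the received samples against all 2^k codewords, which quickly becomes infeasible as k grows. The function and parameter names below are illustrative.

```python
# Hedged sketch of brute-force ML decoding for a binary code on an AWGN-like
# channel with BPSK mapping (bit 0 -> +1, bit 1 -> -1). Maximizing correlation
# with the received soft values is equivalent to ML decoding in this setting.
import itertools
import numpy as np

def ml_decode(y_soft, G):
    """G: k x n binary generator matrix; y_soft: received real-valued samples."""
    k, n = G.shape
    best_cw, best_metric = None, -np.inf
    for bits in itertools.product([0, 1], repeat=k):   # all 2^k messages
        cw = (np.array(bits) @ G) % 2                  # encode the message
        s = 1.0 - 2.0 * cw                             # BPSK modulation
        metric = float(y_soft @ s)                     # correlation metric
        if metric > best_metric:
            best_cw, best_metric = cw, metric
    return best_cw
```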
In the search for efficient decoding algorithms, researchers rediscovered, at the end of the 1990s, "low-density parity-check" (LDPC) codes. This capacity-achieving coding scheme is nowadays one of the most used options for providing high throughput with relatively long codes. A testament to its success is the incorporation of LDPC codes into the 5G standard. The generalized version of this scheme, known as GLDPC, allows for flexibility, as it supports any type of component code in its design.
- This thesis investigates the performance and complexity trade-offs of the recently developed SO-GRAND (Soft-Output Guessing Random Additive Noise Decoder) compared to a trellis-based algorithm. Both decoders are used as component decoders of a GLDPC code. The analysis shows that, while SO-GRAND and the trellis algorithm scale similarly in complexity with respect to block length and redundancy, SO-GRAND demands more computations due to factors such as the list size and the calculation of probabilities for noise vectors. However, GRAND algorithms are more flexible, offering configurable trade-offs between complexity and bit-error-rate (BER) performance, which makes them promising candidates for future research. Additionally, the soft-input hard-output version of the algorithm (ORBGRAND) shows competitive complexity compared to the trellis approach.
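To illustrate the role of the list and of the noise-vector probabilities mentioned above, the following is a rough sketch, with illustrative names, of how per-bit soft output can be formed from a list of candidate codewords and the probabilities of their associated noise patterns. A complete SO-GRAND decoder also accounts for the probability that the transmitted codeword is not in the list, which is omitted in this sketch.

```python
# Hedged sketch of list-based soft output in the spirit of SO-GRAND: each
# codeword found during guessing is kept together with the probability of its
# associated noise pattern, and approximate per-bit LLRs are formed from the
# weighted list members only.
import numpy as np

def soft_output_from_list(code_list, noise_probs, n):
    """code_list: candidate codewords (0/1 numpy arrays of length n).
    noise_probs: probability of the noise pattern behind each candidate.
    Returns approximate per-bit LLRs (positive values favor bit 0)."""
    p = np.asarray(noise_probs, dtype=float)
    p /= p.sum()                                   # normalize over the list
    p1 = np.zeros(n)                               # P(bit = 1 | list)
    for cw, prob in zip(code_list, p):
        p1 += prob * cw
    eps = 1e-12                                    # avoid log(0)
    return np.log((1.0 - p1 + eps) / (p1 + eps))
```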
- By detailing the strengths, weaknesses, and practical challenges of GRAND algorithms, this study hopefully contributes to advancing the understanding of these decoders and their potential in modern communication systems.
Please use this URL to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9176689
- author
- Chrestani, Giovani
- supervisor
- organization
- course
- EITM02 20241
- year
- 2024
- type
- H2 - Master's Degree (Two Years)
- subject
- keywords
- Channel Coding, GRAND, Generalized LDPC, Forward Error Correction, Iterative Decoders
- report number
- LU/LTH-EIT 2024-1025
- language
- English
- id
- 9176689
- date added to LUP
- 2024-10-22 11:14:41
- date last changed
- 2024-10-22 11:14:41
@misc{9176689, abstract = {{Recently, a newly developed family of decoders named Guessing Random Additive Noise Decoding (GRAND) has been gaining attention. The key idea of the decoder is to search for the most probable error vectors that, combined with the transmitted message, produce the received block. Among the different variations of the algorithm, the soft-input soft-output (SISO) decoder, known as SO-GRAND, emerges as a good candidate for use in iterative schemes by employing list decoding. In this work, SO-GRAND's complexity and performance are analyzed against a trellis-based soft-output decoder. Both decoders are used as concatenated components of a generalized LDPC code. Results show that, while the soft-input hard-output version of GRAND (known as ORBGRAND) demonstrates good complexity properties, the same is not true for SO-GRAND, which, in general, does not match the computation count of the trellis algorithm for typical list sizes. Nevertheless, SO-GRAND has the appealing property of trading complexity against performance, an approach that may be of interest when performance is already good enough. Better performance of SO-GRAND is expected for higher-order approximations in the ORBGRAND algorithm. The study also features several simulations with different types of codes (CRC, Hamming, and random), demonstrating the universality of both the trellis-based and SO-GRAND algorithms.}}, author = {{Chrestani, Giovani}}, language = {{eng}}, note = {{Student Paper}}, title = {{Guessing random additive noise decoding (GRAND) in concatenated schemes: An investigation of complexity and BER behavior}}, year = {{2024}}, }