On the exact bit error probability for Viterbi decoding of convolutional codes
Information Theory and Applications Workshop (ITA), 2011

Abstract
Forty years ago, Viterbi published upper bounds on both the first error event (burst error) and bit error probabilities for Viterbi decoding of convolutional codes. These bounds were derived using a signal flow chart technique for convolutional encoders. In 1995, Best et al. published a formula for the exact bit error probability for Viterbi decoding of the rate R=1/2, memory m=1 convolutional encoder with generator matrix G(D)=(1 1+D) when used to communicate over the binary symmetric channel. Their method was later extended to the rate R=1/2, memory m=2 generator matrix G(D)=(1+D^2 1+D+D^2) by Lentmaier et al.
In this paper, we use a different approach to derive the exact bit error probability. We derive and solve a general matrix recurrent equation connecting the average information weights at the current and previous steps of the Viterbi decoding. A closed-form expression for the exact bit error probability is given. Our general solution yields the expressions for the exact bit error probability obtained by Best et al. (m=1) and Lentmaier et al. (m=2) as special cases.
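The setting in the abstract can be illustrated with a small simulation: the rate R=1/2, memory m=1 encoder G(D)=(1 1+D), a binary symmetric channel with crossover probability p, and hard-decision Viterbi decoding on the resulting two-state trellis. This is only an illustrative Monte Carlo sketch — the estimate it produces approximates the bit error probability that the paper derives exactly in closed form, and all function names here are my own, not the paper's.

```python
import random

def encode(u):
    """Rate-1/2, memory-1 encoder G(D) = (1, 1+D): outputs (u_t, u_t + u_{t-1})."""
    prev = 0
    out = []
    for bit in u:
        out.append(bit)         # first generator: u_t
        out.append(bit ^ prev)  # second generator: u_t + u_{t-1} (mod 2)
        prev = bit
    return out

def viterbi_decode(r):
    """Hard-decision Viterbi decoding on the 2-state trellis of G(D) = (1, 1+D).

    The state is the previous input bit, so the next state equals the input.
    """
    n = len(r) // 2
    INF = float('inf')
    metric = [0, INF]           # start in the all-zero state
    back = []                   # back[t][state] = surviving predecessor state
    for t in range(n):
        r1, r2 = r[2 * t], r[2 * t + 1]
        new_metric = [INF, INF]
        choice = [None, None]
        for s in (0, 1):        # current state (previous input bit)
            if metric[s] == INF:
                continue
            for u in (0, 1):    # input bit u drives the trellis to state u
                d = (u != r1) + ((u ^ s) != r2)   # Hamming branch metric
                if metric[s] + d < new_metric[u]:
                    new_metric[u] = metric[s] + d
                    choice[u] = s
        back.append(choice)
        metric = new_metric
    # Trace back from the better final state; the state sequence is the input.
    state = 0 if metric[0] <= metric[1] else 1
    decoded = []
    for t in range(n - 1, -1, -1):
        decoded.append(state)
        state = back[t][state]
    return decoded[::-1]

def estimate_ber(p, n_bits=20000, seed=1):
    """Monte Carlo estimate of the bit error probability over a BSC(p)."""
    rng = random.Random(seed)
    u = [rng.randint(0, 1) for _ in range(n_bits)]
    v = encode(u)
    r = [b ^ (1 if rng.random() < p else 0) for b in v]  # BSC bit flips
    uhat = viterbi_decode(r)
    return sum(a != b for a, b in zip(u, uhat)) / n_bits
```

For p = 0 the decoder recovers the information sequence exactly; for small p the estimate can be compared against the closed-form expression derived in the paper.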
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/record/1766856
 author
 Bocharova, Irina (LU); Hug, Florian (LU); Johannesson, Rolf (LU) and Kudryashov, Boris (LU)
 organization
 publishing date
 2011
 type
 Contribution to conference
 publication status
 published
 subject
 conference name
 Information Theory and Applications Workshop (ITA), 2011
 external identifiers

 Scopus:79955757285
 language
 English
 LU publication?
 yes
 id
 692e63b4c8ab4edda24d46c4d04b68e0 (old id 1766856)
 alternative location
 http://ita.ucsd.edu/workshop/11/files/paper/paper_1887.pdf
 date added to LUP
 2011-01-25 12:18:26
 date last changed
 2016-10-13 04:58:19
@misc{692e63b4c8ab4edda24d46c4d04b68e0,
  abstract = {Forty years ago, Viterbi published upper bounds on both the first error event (burst error) and bit error probabilities for Viterbi decoding of convolutional codes. These bounds were derived using a signal flow chart technique for convolutional encoders. In 1995, Best et al. published a formula for the exact bit error probability for Viterbi decoding of the rate R=1/2, memory m=1 convolutional encoder with generator matrix G(D)=(1 1+D) when used to communicate over the binary symmetric channel. Their method was later extended to the rate R=1/2, memory m=2 generator matrix G(D)=(1+D^2 1+D+D^2) by Lentmaier et al. In this paper, we shall use a different approach to derive the exact bit error probability. We derive and solve a general matrix recurrent equation connecting the average information weights at the current and previous steps of the Viterbi decoding. A closed form expression for the exact bit error probability is given. Our general solution yields the expressions for the exact bit error probability obtained by Best et al. (m=1) and Lentmaier et al. (m=2) as special cases.},
  author   = {Bocharova, Irina and Hug, Florian and Johannesson, Rolf and Kudryashov, Boris},
  language = {eng},
  title    = {On the exact bit error probability for Viterbi decoding of convolutional codes},
  year     = {2011},
}