CN-117254880-B - Acceleration of S-polarized ECC throughput by scheduler
Abstract
Acceleration of S-polarized ECC throughput by a scheduler is disclosed. A method for simplified successive cancellation list (SSCL) error decoding of S-polarized codes includes representing the S-polarized code as a perfect binary tree; providing a node v with a soft information vector α_v from its parent node; computing the soft information vector α_{v_l} of the left child node of node v; providing node v with a hard decision vector β_{v_l} from the left child node and using it together with α_v to create a soft information vector α_{v_r}, which is passed to the right child node of node v; providing node v with a hard decision vector β_{v_r} from its right child node and using it together with β_{v_l} to create a hard decision vector β_v of hard decisions, which is delivered to the parent node of v; and, when v is the i-th leaf node of the perfect tree, updating both path metrics and selecting, among the paths obtained by expanding the current paths, those with the lowest path metrics.
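The hard-decision combination summarized above can be sketched as follows. This is an illustrative sketch, not part of the patent text: the function name and the list-of-bits representation are assumptions, and the combination rule shown is the standard polar-code beta recursion, under which the parent's vector is formed from the children's vectors as (β_l ⊕ β_r, β_r).

```python
def combine_beta(beta_left, beta_right):
    """Combine the children's hard-decision vectors at an internal node.

    Per the standard polar-code beta recursion assumed here, the parent's
    vector is the elementwise XOR of the two child vectors followed by the
    right child's vector: beta_v = (beta_l XOR beta_r, beta_r).
    """
    return [a ^ b for a, b in zip(beta_left, beta_right)] + list(beta_right)
```

For example, `combine_beta([1, 0], [0, 1])` yields `[1, 1, 0, 1]`, a vector twice the length of each child's vector, matching the claim's doubling of vector length at each level of the tree.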
Inventors
- AMIT BERMAN
- Sarit Buzaglo
- Ariel Dobuchak
Assignees
- Samsung Electronics Co., Ltd.
Dates
- Publication Date
- 20260512
- Application Date
- 20230202
- Priority Date
- 20220616
Claims (19)
- 1. A method of SSCL error decoding of S-polarized codes, the SSCL being a simplified successive cancellation list, said method comprising: representing an S-polarized code of length N=2^n as a perfect binary tree with 2N−1 nodes, where n is a non-negative integer, and where 1≤l≤L for the l-th path of a list of L paths through the perfect binary tree, wherein the S-polarized code is a GCC with an RS code as its outer code and a polarized code as its inner code, the RS being Reed-Solomon, the GCC being a generalized concatenated code, wherein S−1 outer codes and S inner codes are used to encode information into an N×J array, where S and J are non-negative integers, wherein the GCC is encoded in S stages, wherein at each stage s new information bits and parity bits from the previous outer code are encoded into codewords of C^(s) and stored as rows of the array, wherein C^(s) is the inner code at stage s, k^(s) is the amount of data in the outer codeword at stage s, and k^(s+1) is the amount of data in the outer codeword at stage s+1, wherein the encoded rows are mapped into a coset array in which the (s+1)-th outer code systematically encodes the columns of the current coset array, and wherein parity bits of the obtained codeword are transmitted to the next inner code for encoding; providing a soft information vector α_v^(l) of length 2^d from a parent node v_p to a node v in a decoding path l at a depth d in the perfect binary tree; for each path in the list of paths in the perfect binary tree, computing a soft information vector α_{v_l}^(l) of length 2^{d−1} for the left child node v_l of node v; providing node v with a hard decision vector β_{v_l}^(l) of length 2^{d−1} from the left child node, and using the vectors α_v^(l) and β_{v_l}^(l) together to create a soft information vector α_{v_r}^(l) of length 2^{d−1} and transmitting the vector α_{v_r}^(l) to the right child node of node v; providing node v with a hard decision vector β_{v_r}^(l) of length 2^{d−1} from its right child node, and using the vectors β_{v_l}^(l) and β_{v_r}^(l) together to create a hard decision vector β_v^(l) of length 2^d and passing the vector β_v^(l) to its parent node; when v is the i-th leaf node of the perfect binary tree, 0≤i<2^n, then for each path l in the list of paths, updating two path metrics PM_i^(2l) and PM_i^(2l+1) according to α_v^(l) and PM_{i−1}^(l), wherein the path metric PM_i^(2l) is the log-likelihood ratio representing that path 2l through leaf node i is the correctly decoded path of the codeword, the path metric PM_{i−1}^(l) is the log-likelihood ratio representing that path l through leaf node i−1 is the correctly decoded path of the codeword, and the path metric PM_i^(2l+1) is the log-likelihood ratio representing that path 2l+1 through leaf node i is the correctly decoded path of the codeword; selecting the L paths having the lowest path metrics among the 2L paths obtained by expanding the current L paths with a 0 bit or with a 1 bit; and, if the l-th path is extended by a 0 bit, setting the vector β_v^(l) to 0, otherwise setting the vector β_v^(l) to 1, wherein β_v^(l) is the hard decision vector of length 2^d passed to its parent node v_p.
- 2. The method of claim 1, further comprising, when the node v is a frozen leaf node, expanding all paths by a 0 bit and, for each path index 1≤l≤L, setting the vector β_v^(l) to 0, wherein the frozen leaf node forces the hard decision to zero.
- 3. The method of claim 2, further comprising, for a node that is the root of a subtree having only frozen leaf nodes, a RATE-0 node, for a node that is the root of a subtree having only information leaf nodes, a RATE-1 node, and for a node that is the root of a subtree whose leaf nodes are all frozen leaf nodes except for the rightmost leaf node, a REP node, updating the L best path metrics and computing the corresponding hard decision vector without accessing other nodes in the subtree of the perfect binary tree.
- 4. A method according to claim 3, wherein for both RATE-0 and REP nodes, updating the L best path metrics is performed in parallel for all paths in the list, wherein the latency of the updating depends on the depth of the node and not on the list size.
- 5. A method according to claim 3, wherein updating the L best path metrics comprises, for a RATE-1 node of depth d, sequentially computing m = min{L−1, 2^{n−d}} bits of the hard decision, wherein the latency of the updating depends on the list size.
- 6. The method of claim 1, further comprising: selecting t paths for those leaf nodes whose correct path is likely to be among the best t paths, where t < L; applying a CRC detector on the k-th leaf node; and continuing decoding until the number of paths equals one.
- 7. The method of claim 6, wherein determining those leaf nodes whose correct paths are likely to be among the best t < L paths includes one or more of selecting those paths whose path metrics are greater than a predetermined threshold, selecting the best t < L coding paths using machine learning, selecting the best t < L coding paths using a classifier, or selecting the best t < L coding paths using forward prediction for each path.
- 8. The method of claim 6, wherein different paths are processed in parallel, each path being associated with a memory storing 2N−1 soft decision values, and each memory being assigned a unique processing block that propagates soft information along the decoding tree.
- 9. The method of claim 6, wherein each node in the decoding tree stores soft information and hard decisions, wherein hard decisions are represented by one bit and soft information is represented by a plurality of bits, and the method further comprises saving, for a node at depth d, 2^{n−d} soft information values and 2^{n−d} hard decisions for each path in the list, and pruning paths in the decoding tree.
- 10. A method of error decoding an S-polarized code, the method comprising: representing an S-polarized code of length N=2^n as a perfect binary tree with 2N−1 nodes, wherein 1≤l≤L for the l-th path of a list of L paths through the perfect binary tree, wherein the S-polarized code is a GCC with an RS code as its outer code and a polarized code as its inner code, the RS being Reed-Solomon, the GCC being a generalized concatenated code, wherein S−1 outer codes and S inner codes are used to encode information into an N×J array, wherein the GCC is encoded in S stages, wherein at each stage s new information bits and parity bits from a previous outer code are encoded into codewords of C^(s) and stored as rows of the array, wherein C^(s) is the inner code at stage s, k^(s) is the amount of data in the outer codeword at stage s, and k^(s+1) is the amount of data in the outer codeword at stage s+1, wherein the encoded rows are mapped into a coset array in which the (s+1)-th outer code systematically encodes the columns of the current coset array, and wherein parity bits of the obtained codeword are sent to the next inner code for encoding; submitting a plurality of frames to a plurality of row decoders and counting the number of frames successfully decoded, wherein a frame is a plurality of rows of the S-polarized code array decoded as codewords of C^(0); performing RS decoding, the RS being Reed-Solomon, when the number of successfully decoded frames reaches K+2m, where K is the dimension of the codeword of C^(0) and m is the number of uncorrected frames, while decoding the other plurality of frames at the same stage; and wherein, when the RS decoding is successful, a coset from the RS decoding is used to decode the upcoming plurality of frames at a next stage of decoding, and otherwise, RS decoding is repeated with another plurality of frames.
- 11. The method of claim 10, wherein decoding each row of the S-polarized code array into a codeword of C^(0) comprises: providing a soft information vector α_v^(l) of length 2^d from parent node v_p to node v in decoding path l at depth d in the perfect binary tree; for each path in the list of paths in the perfect binary tree, computing a soft information vector α_{v_l}^(l) of length 2^{d−1} for the left child node v_l of node v; providing a hard decision vector β_{v_l}^(l) of length 2^{d−1} from the left child node to node v, and using the vectors α_v^(l) and β_{v_l}^(l) together to create a soft information vector α_{v_r}^(l) of length 2^{d−1} and transmitting the vector α_{v_r}^(l) to the right child node of node v; providing a hard decision vector β_{v_r}^(l) of length 2^{d−1} from the right child node of node v to node v, and using the vectors β_{v_l}^(l) and β_{v_r}^(l) together to create a hard decision vector β_v^(l) of length 2^d and passing the vector β_v^(l) to the parent node of node v; when v is the i-th leaf node of the perfect binary tree, 0≤i<2^n, then for each path l in the list of paths, updating two path metrics PM_i^(2l) and PM_i^(2l+1) according to α_v^(l) and PM_{i−1}^(l), wherein the path metric PM_i^(2l) is the log-likelihood ratio representing that path 2l through leaf node i is the correctly decoded path of the codeword, the path metric PM_{i−1}^(l) is the log-likelihood ratio representing that path l through leaf node i−1 is the correctly decoded path of the codeword, and the path metric PM_i^(2l+1) is the log-likelihood ratio representing that path 2l+1 through leaf node i is the correctly decoded path of the codeword; selecting the L paths having the lowest path metrics among the 2L paths obtained by expanding the current L paths with a 0 bit or with a 1 bit; and, if the l-th path is extended by a 0 bit, setting the vector β_v^(l) to 0, otherwise setting the vector β_v^(l) to 1, wherein β_v^(l) is the hard decision vector of length 2^d passed to parent node v_p.
- 12. The method of claim 11, further comprising expanding all paths by a 0 bit when v is a frozen leaf node and, for each path index 1≤l≤L, setting the vector β_v^(l) to 0, wherein the frozen leaf node forces the hard decision to zero.
- 13. The method of claim 12, further comprising, for a node that is the root of a subtree having only frozen leaf nodes, a RATE-0 node, for a node that is the root of a subtree having only information leaf nodes, a RATE-1 node, and for a node that is the root of a subtree whose leaf nodes are all frozen leaf nodes except for the rightmost leaf node, a REP node, updating the L best path metrics and computing the corresponding hard decision vector without accessing other nodes in the subtree of the perfect binary tree.
- 14. The method of claim 13, wherein updating the L best path metrics is performed in parallel for all paths in the list for both RATE-0 nodes and REP nodes, wherein a latency of the updating depends on a depth of the nodes and not on a list size.
- 15. The method of claim 13, wherein updating the L best path metrics comprises, for a RATE-1 node of depth d, sequentially computing m = min{L−1, 2^{n−d}} bits of the hard decision, wherein the latency of the updating depends on a list size.
- 16. The method of claim 11, further comprising: selecting t paths for those leaf nodes whose correct path is likely to be among the best t paths, where t < L; applying a CRC detector on the k-th leaf node; and continuing decoding until the number of paths equals one.
- 17. The method of claim 16, wherein determining those leaf nodes whose correct paths are likely to be among the best t < L paths comprises one or more of selecting those paths whose path metrics are greater than a predetermined threshold, selecting the best t < L coding paths using machine learning, selecting the best t < L coding paths using a classifier, or selecting the best t < L coding paths using forward prediction for each path.
- 18. The method of claim 16, wherein different paths are processed in parallel, each path being associated with a memory storing 2N−1 soft decision values, and each memory being assigned a unique processing block that propagates soft information along the decoding tree.
- 19. The method of claim 16, wherein each node in the decoding tree stores soft information and hard decisions, wherein hard decisions are represented by one bit and soft information is represented by a plurality of bits, and the method further comprises saving, for a node at depth d, 2^{n−d} soft information values and 2^{n−d} hard decisions for each path in the list, and pruning paths in the decoding tree.
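As a rough illustration of the list decoding recursion in claims 1-9, the following sketch implements a plain successive cancellation list decoder for a short polar code. Everything here is an assumption made for the sketch: the names (f_minsum, g, sc_llr, decode_sscl), the min-sum approximation of the soft combine, and the path-metric penalty rule. The patent's SSCL additionally exploits the RATE-0/RATE-1/REP shortcuts and the GCC structure, which are omitted.

```python
import math

def f_minsum(a, b):
    # left-child soft combine (min-sum approximation of the f function)
    return [math.copysign(1.0, x * y) * min(abs(x), abs(y)) for x, y in zip(a, b)]

def g(a, b, beta_left):
    # right-child soft combine, conditioned on the left child's hard decisions
    return [y + (1 - 2 * u) * x for x, y, u in zip(a, b, beta_left)]

def polar_encode(u):
    # beta recursion: beta_v = (beta_l XOR beta_r, beta_r)
    if len(u) == 1:
        return list(u)
    h = len(u) // 2
    bl, br = polar_encode(u[:h]), polar_encode(u[h:])
    return [x ^ y for x, y in zip(bl, br)] + br

def sc_llr(alpha, u_prev):
    # LLR of the next undecided bit in this subtree, given decided bits u_prev
    if len(alpha) == 1:
        return alpha[0]
    h = len(alpha) // 2
    a, b = alpha[:h], alpha[h:]
    if len(u_prev) < h:                      # next bit lies in the left subtree
        return sc_llr(f_minsum(a, b), u_prev)
    beta_left = polar_encode(u_prev[:h])     # hard decisions of the left subtree
    return sc_llr(g(a, b, beta_left), u_prev[h:])

def decode_sscl(llr, frozen, L=2):
    """Return the hard-decision bit vector of the best surviving path."""
    paths = [(0.0, [])]                      # (path metric, decided bits)
    for i in range(len(llr)):
        cand = []
        for pm, u in paths:
            l = sc_llr(llr, u)
            pen0 = abs(l) if l < 0 else 0.0  # penalty for deciding bit 0
            pen1 = abs(l) if l > 0 else 0.0  # penalty for deciding bit 1
            if i in frozen:
                cand.append((pm + pen0, u + [0]))   # frozen leaf: always 0
            else:
                cand.append((pm + pen0, u + [0]))   # expand with a 0 bit
                cand.append((pm + pen1, u + [1]))   # expand with a 1 bit
        cand.sort(key=lambda t: t[0])
        paths = cand[:L]                     # keep the L lowest path metrics
    return paths[0][1]
```

With all-positive input LLRs (an all-zero codeword under the usual BPSK convention assumed here, where a positive LLR favors bit 0), the decoder returns the all-zero bit vector, e.g. `decode_sscl([2.0, 1.5, 3.0, 0.5], {0, 1}, L=2)` gives `[0, 0, 0, 0]`.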
Description
Acceleration of S-polarized ECC throughput by scheduler

Technical Field

Embodiments of the present disclosure relate to a method of performing error correction in digital communications with reduced latency, and to a hardware implementation of the method.

Background

S-polarized (S-Polar) is a generalized concatenated code (Generalized Concatenated Code, GCC) with a Reed-Solomon (RS) code as its outer code and a polarized code as its inner code. In a GCC, S−1 outer codes and S inner codes are used to encode information into an N×J array. The inner codes C^(0), …, C^(S−1) are linear nested codes, i.e., for 1≤s≤S−1, the s-th code is contained in the (s−1)-th code. In particular, if k^(s) is the dimension of the s-th code, then k^(s) ≤ k^(s−1). The s-th outer code is assumed to be a systematic code of length J and dimension λ_s. When considering the outer codes, it is convenient to include, for stages 0 and S, two trivial outer codes having dimensions 0 and J, respectively. The s-th outer code (1≤s<S) is defined over an extension field of GF(2). The inner code encodes the information of the outer code and some parity bits into rows of the array. The encoded rows are mapped to a coset (coset) field on which the outer codes operate. The coset of a row codeword is a bit vector such that, for each stage s, prepending the vector's first bits allows a codeword of C^(0) to be reduced to a codeword of C^(s); thus the side information (side information) from the coset increases the correctability of the row. More specifically, the encoding of the GCC is performed in S stages, where in each stage s new information bits and parity bits from the previous outer code are encoded into codewords of C^(s) and stored as rows of the array. The encoded rows are mapped into the coset field. The (s+1)-th outer code systematically encodes the columns of the current coset array. The parity bits of the obtained codeword are transmitted to the next inner code for encoding. Fig. 1 illustrates the structure of a codeword of an N×J GCC having S stages. White cells represent information symbols, while gray cells represent parity bits.

Disclosure of Invention

According to an embodiment of the present disclosure, there is provided a method of simplified successive cancellation list (SSCL) error decoding of S-polarized codes, comprising representing an S-polarized code of length N=2^n as a perfect binary tree with 2N−1 nodes, where n is a non-negative integer and 1≤l≤L for the l-th path in a list of L paths through the binary tree, where the S-polarized code is a Generalized Concatenated Code (GCC) with a Reed-Solomon (RS) code as its outer code and a polarized code as its inner code, where S−1 outer codes and S inner codes are used to encode information into an N×J array, where S and J are non-negative integers, where the GCC is encoded in S stages, where at each stage s new information bits and parity bits from the previous outer code are encoded into codewords of C^(s) and stored as rows of the array, where C^(s) is the inner code at stage s and k^(s) is the amount of data in the outer codeword at stage s, where the encoded rows are mapped into a coset array in which the (s+1)-th outer code systematically encodes the columns of the current coset array, and where parity bits of the obtained codeword are sent to the next inner code for encoding; providing a node v in decoding path l at depth d in the perfect binary tree with a soft information vector α_v^(l) of length 2^d from a parent node v_p; for each path in the list of paths in the binary tree, calculating a soft information vector α_{v_l}^(l) of length 2^{d−1} for the left child node v_l of node v; providing node v with a hard decision vector β_{v_l}^(l) of length 2^{d−1} from the left child node and using the vectors α_v^(l) and β_{v_l}^(l) together to create a soft information vector α_{v_r}^(l) of length 2^{d−1} and passing the vector α_{v_r}^(l) to the right child node of node v; providing node v with a hard decision vector β_{v_r}^(l) of length 2^{d−1} from its right child node and using the vectors β_{v_l}^(l) and β_{v_r}^(l) together to create a hard decision vector β_v^(l) of length 2^d and passing the vector β_v^(l) to its parent node; when v is the i-th leaf node of the perfect tree, 0≤i<2^n, then for each path in the list of paths, updating two path metrics according to α_v^(l) and PM_{i−1}^(l), wherein PM_i^(l) is the log-likelihood ratio represented by path l through leaf node i; selecting the L paths with the lowest path metrics among the 2L paths obtained by expanding the current L paths with a 0 bit or a 1 bit; and, if the l-th path is expanded with a 0 bit, setting the vector β_v^(l) to 0, otherwise setting the vector β_v^(l) to 1. According to yet another embodiment of the present disclosure, the method includes expanding all paths by a 0 bit when v is a frozen leaf node and, for each path index 1≤l≤L, setting the vector β_v^(l) to 0, where the frozen leaf node forces the hard decision to zero. According to yet another embodiment of the present disclosure, the method comprises, for a node that is the root of a subtree having only frozen leaf nodes (RATE-0 node), for a node that is the root
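The frame scheduling of claim 10 can be sketched in miniature as below. This is a hypothetical illustration only: the callables `row_decode` and `rs_decode`, the frame representation, and the bookkeeping are assumptions made for the sketch, and the patent's hardware scheduler is not specified at this level of detail. The point shown is the early trigger: RS decoding of the outer code is attempted as soon as K + 2m rows decode successfully, rather than after all rows finish.

```python
def schedule_stage(frames, row_decode, rs_decode, K, m):
    """Toy scheduler for one decoding stage of the GCC (hypothetical API).

    row_decode(frame) returns a decoded row codeword or None on failure;
    rs_decode(rows) attempts outer RS decoding and returns the recovered
    cosets or None. As soon as K + 2*m rows have decoded successfully,
    RS decoding is attempted; on success, the recovered cosets are
    returned as side information for the next stage.
    """
    decoded = []
    for idx, frame in enumerate(frames):
        row = row_decode(frame)
        if row is not None:
            decoded.append((idx, row))
        if len(decoded) >= K + 2 * m:
            cosets = rs_decode(decoded)
            if cosets is not None:
                return cosets        # use cosets to decode the next stage
    return None                      # retry RS decoding with more frames
```

With a trivial always-successful row decoder, RS decoding triggers after exactly K + 2m rows, so the remaining rows in the stage never need to be waited on, which is the throughput gain the title refers to.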