DE-102024133216-A1 - Method, computer program product and test device for generating a test data set for testing a receiver and test data set
Abstract
The invention relates to a method for generating a test data set (TDS) for testing, in particular for adapting, a receiver (4), comprising the steps: (S200) applying the test data set (TDS) to the receiver (4), (S300) capturing a receiver-side data set (EDS) based on the test data set (TDS), (S400) evaluating the receiver-side data set (EDS) to determine a plurality of erroneously received symbols (FES) and their respective positions (P) in the receiver-side data set (EDS), (S500) forming a plurality of error-causing data sets (FVS) by copying a respective section of predetermined length from the test data set (TDS) around the respective erroneously received symbols (FES), (S600) evaluating the error-causing data sets (FVS) to determine a plurality of error-causing positions (FVP) and/or error-causing patterns (FVM) and generating at least one refined error-causing data set (VFVS) based on the error-causing positions (FVP) and/or error-causing patterns (FVM), (S700) modifying the test data set (TDS) by embedding at least one refined error-causing data set (VFVS) into the test data set (TDS).
Inventors
- Alexander Schmitt
- Valentina Unakafova
- Anton Unakafov
- Fabian Deuchler
Assignees
- BitifEye Digital Test Solutions GmbH
Dates
- Publication Date
- 20260513
- Application Date
- 20241113
Claims (14)
- Method for generating a test data set (TDS) for testing, in particular for adapting, a receiver (4), comprising the steps: (S200) applying the test data set (TDS) to the receiver (4), (S300) capturing a receiver-side data set (EDS) based on the test data set (TDS), (S400) evaluating the receiver-side data set (EDS) to determine a plurality of erroneously received symbols (FES) and their respective positions (P) in the receiver-side data set (EDS), (S500) forming a plurality of error-causing data sets (FVS) by copying a respective section of predetermined length from the test data set (TDS) around the respective erroneously received symbols (FES), (S600) evaluating the error-causing data sets (FVS) to determine a plurality of error-causing positions (FVP) and/or error-causing patterns (FVM) and generating at least one refined error-causing data set (VFVS) based on the error-causing positions (FVP) and/or error-causing patterns (FVM), (S700) modifying the test data set (TDS) by embedding at least one refined error-causing data set (VFVS) into the test data set (TDS).
- Method according to Claim 1, wherein the step (S600) of generating at least one refined error-causing data set (VFVS) by modifying the respective error-causing data sets (FVS) at a position (P) other than the position (P) of the erroneously received symbol (FES) comprises the following steps: (S630) generating at least one evaluation data set (ADS) by copying the respective symbols (S) of the plurality of error-causing data sets (FVS) at the respective same position (P) of the plurality of error-causing data sets (FVS), and (S640) evaluating the evaluation data set (ADS) to determine a symbol (S) for the new refined error-causing data set (VFVS) at the position (P).
- Method according to Claim 1 or 2, wherein the step (S600) of generating at least one refined error-causing data set (VFVS) by modifying the respective error-causing data sets (FVS) at a position (P) other than the position (P) of the erroneously received symbol (FES) comprises the following step: (S610) sorting the error-causing data sets (FVS) by error frequency.
- Method according to any one of Claims 1 to 3, wherein the step (S600) of generating at least one refined error-causing data set (VFVS) by modifying the respective error-causing data sets (FVS) at a position (P) other than the position (P) of the erroneously received symbol (FES) comprises the following step: (S620) grouping the error-causing data sets (FVS).
- Method according to any one of Claims 1 to 4, wherein the step (S600) of generating at least one refined error-causing data set (VFVS) by modifying the respective error-causing data sets (FVS) at a position (P) other than the position (P) of the erroneously received symbol (FES) comprises the following step: (S650) shifting the position of the error-causing pattern (FVM) in the error-causing data sets (FVS) to determine a refined error-causing data set (VFVS).
- Method according to any one of Claims 1 to 5, wherein the step (S600) of generating at least one refined error-causing data set (VFVS) by modifying the respective error-causing data sets (FVS) at a position (P) other than the position (P) of the erroneously received symbol (FES) comprises the following step: (S660) balancing the test data set (TDS).
- Computer program product configured to execute a method according to any one of Claims 1 to 6.
- Test data set (TDS) generated by a method according to any one of Claims 1 to 6.
- Test device (2) for generating a test data set (TDS) for testing, in particular for adapting, a receiver (4), wherein the test device (2) is configured to supply the receiver (4) with a test data set (TDS), to capture a receiver-side data set (EDS) based on the test data set (TDS), to evaluate the receiver-side data set (EDS) to determine a plurality of erroneously received symbols (FES) and their respective positions (P) in the receiver-side data set (EDS), to form a plurality of error-causing data sets (FVS) by copying a respective section of predetermined length from the test data set (TDS) around the respective erroneously received symbols (FES), to evaluate the error-causing data sets (FVS) to determine a plurality of error-causing positions (FVP), to generate at least one refined error-causing data set (VFVS) based on the error-causing positions (FVP), and to modify the test data set (TDS) by embedding at least one refined error-causing data set (VFVS) into the test data set (TDS).
- Test device (2) according to Claim 9 , wherein the test device (2) is configured to generate at least one evaluation data set (ADS) by copying the respective symbols (S) of the plurality of error-causing data sets (FVS) at the respective same position (P) of the plurality of error-causing data sets (FVS) and to evaluate the evaluation data set (ADS) in order to determine a symbol (S) for the refined error-causing data set (VFVS) at the position (P).
- Test device (2) according to Claim 9 or 10 , wherein the test device (2) is configured to sort the error-causing data sets (FVS) according to error frequency.
- Test device (2) according to Claim 9, 10 or 11, wherein the test device (2) is configured to group the error-causing data sets (FVS).
- Test device (2) according to any one of Claims 9 to 12, wherein the test device (2) is configured to shift the position of the error-causing pattern (FVM) in the error-causing data sets (FVS) in order to determine a refined error-causing data set (VFVS).
- Test device (2) according to any one of Claims 9 to 13, wherein the test device (2) is configured to balance the test data set (TDS).
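The claimed generation loop (steps S500 to S700) can be illustrated with a minimal sketch. All function names, the window length, the pattern-counting heuristic and the embedding rule are hypothetical illustrations; the claims do not prescribe any concrete evaluation or placement rule:

```python
from collections import Counter

def form_error_windows(test_data, error_positions, half_len=8):
    """S500: copy a section of predetermined length from the test data set
    (TDS) around each erroneously received symbol (FES)."""
    windows = []
    for p in error_positions:
        lo, hi = max(0, p - half_len), min(len(test_data), p + half_len + 1)
        windows.append(tuple(test_data[lo:hi]))
    return windows

def find_error_causing_patterns(windows, pattern_len=4):
    """S600 (illustrative): count short sub-patterns across all error-causing
    data sets (FVS) and return the most frequent ones as candidate FVM."""
    counts = Counter()
    for w in windows:
        for i in range(len(w) - pattern_len + 1):
            counts[w[i:i + pattern_len]] += 1
    return [pat for pat, _ in counts.most_common(3)]

def embed_refined_sets(test_data, patterns, stride=64):
    """S700: modify the TDS by embedding refined error-causing data sets
    (VFVS) at regular offsets (the placement rule here is an assumption)."""
    data = list(test_data)
    offset = 0
    for pat in patterns:
        if offset + len(pat) <= len(data):
            data[offset:offset + len(pat)] = pat
        offset += stride
    return data
```

Iterating these steps concentrates the test data set on the symbol sequences that the receiver under test actually gets wrong, instead of relying on exhaustive coverage.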
Description
The invention relates to a method, a computer program product and a test device for generating a test data set for testing a receiver, as well as to the test data set itself. Since 2010, the data rates of modern high-speed serial interfaces (HSS) have doubled every 3 to 4 years. Data rates exceeding 40 Gbit/s are now achieved by several high-speed interfaces, such as PCIe 6 and 7, IEEE 802.3ck and 802.3dj, and MIPI M-PHY G6. However, conventional receiver adaptation and verification methods are not suitable for fully digital receivers, especially at data rates above 100 Gbit/s. Conventional adaptation and verification methods evaluate the output signal after receiver equalization to determine metrics such as eye or bathtub diagrams or the signal-to-noise ratio (SNR). These methods are not suitable for newer equalization techniques such as maximum likelihood symbol detection (MLSD), because MLSD provides discrete symbols at its output, unlike the continuous signals used in continuous-time linear equalization (CTLE) or decision feedback equalization (DFE). Testing high-speed interfaces such as PCIe 7 and IEEE 802.3dj is extremely challenging. Firstly, the bandwidth of test equipment such as oscilloscopes, signal analyzers and vector network analyzers increases more slowly than the transmission rates; for example, few commercially available oscilloscopes offer sufficient bandwidth for calibrating a PCIe 7 test signal at 100 GHz and above. Secondly, at high transmission rates it becomes increasingly difficult to compensate for the effects of test equipment and fixtures on the signal. These issues may necessitate replacing traditional compliance tests with on-chip tests/self-test functions, as is already the case with UCIe interfaces due to their very short transmission channels. The error rates of modern receivers cannot be reliably measured with test sequences of practical length, i.e. below 10<sup>10</sup> symbols. There are two main reasons for this:
1. The probability of an error occurring in a specific symbol within a data stream depends on many symbols before and after it. One cause of this is reflections resulting from impedance discontinuities in the transmission channel. To ensure that the method reliably detects reflections, the test data should include all symbol sequences that exhibit the effects of reflections. Since the positions of the reflections are not known in advance, the test data should contain all possible subsequences of a given length.
2. The multi-level signal encodings used today mean that each symbol no longer has only two possible values, as in the previously used non-return-to-zero (NRZ) encoding, but four in the case of PAM4, eight in the case of PAM8, and so on. A test data set containing all symbol subsequences of a certain length must therefore be much longer than with non-return-to-zero encoding.
For PCIe 5-7, one source of reflections is the connector between the add-in card and the system board. The typical propagation delay between the connector and the die on the add-in card is approximately t = 0.7 ns, so reflections from the connector arrive with a delay of 2t = 1.4 ns. For PCIe 5, 6 and 7, this time corresponds to 45 UI (unit intervals), 45 UI and 90 UI, respectively, which defines the minimum length of the symbol sequences that should be used for testing. For non-return-to-zero encoding, a test data set combining all binary sequences of 45 UI is 2<sup>45</sup> ≈ 3.5·10<sup>13</sup> symbols long. Testing a PCIe 5 RX with such a test data set would take only around 1100 s, less than half an hour. For PAM4, the test data set is 4<sup>45</sup> ≈ 10<sup>27</sup> symbols long; testing a PCIe 6 RX with this data set would take over a billion years. For PCIe 7, the situation is even worse, as the test data set is 4<sup>90</sup> symbols long. Measuring the symbol error rate (SER) therefore presents a major challenge in such high-speed interfaces.
The receiver's behavior is characterized by the long-term or "true" symbol error rate, defined as the ratio of symbols incorrectly received by the receiver to the total number of symbols transmitted, taken across all possible data sequences. The symbol error rate measured with a test data set should be greater than or equal to the true symbol error rate. However, practical test data sets contain only a tiny fraction of the possible symbol sequences and therefore do not allow a correct assessment of the true symbol error rate. Problematic symbol sequences are very likely underrepresented in the test data set, so there is a high probability that the measured symbol error rate underestimates the true symbol error rate. It is therefore the purpose of the invention to show ways in which improvements can be achieved here. The object of the invention is achieved by a method for generating a test data set for testing, in particular for adapting, a receiver, comprising the following steps: applying the test data set to the receiver, capturing a receiver-side data set based on the test data set, evaluating the receiver-side data set to determine a plu