CN-121995383-A - Method and apparatus for processing data associated with a radar system
Abstract
Methods and apparatus for processing data associated with a radar system. A method for processing data associated with a radar system, such as a radar system of a vehicle or of infrastructure equipment, to reduce interference comprises: providing a generative model of the variational autoencoder type, wherein the generative model has a first artificial neural network and a second artificial neural network; providing training data having a plurality of different training data sets for training the generative model; training the first artificial neural network and the second artificial neural network using a first of the training data sets; and further training the second artificial neural network using a second of the training data sets.
Inventors
- A. M. Ahmed
- F. SCHMIDT
Assignees
- Robert Bosch GmbH
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2025-11-07
- Priority Date
- 2024-11-07
Claims (16)
- 1. A method, such as a computer-implemented method, for processing data associated with a radar system (12, 22), such as a radar system of a vehicle (10) or infrastructure equipment (20), to reduce interference, the method comprising: providing (100) a generative model (MOD-VAE) of the variational autoencoder type, wherein the generative model (MOD-VAE) has a first artificial neural network (ANN-1) and a second artificial neural network (ANN-2); providing (102) training data (TD) having a plurality of different training data sets (D0, D1, D2) for training said generative model (MOD-VAE); training (104) the first artificial neural network (ANN-1) and the second artificial neural network (ANN-2) using a first of the training data sets; and further training (106) the second artificial neural network (ANN-2) using a second of the training data sets.
- 2. The method according to claim 1, wherein the first artificial neural network (ANN-1) has a first network architecture φ and is designed to receive a first radar signal (X), for example a potentially erroneous, for example interference-affected, first radar signal, for example in the form of a frequency spectrum, for example in the form of a range-Doppler representation, for example as first input data (ED-1), and to generate first output data (AD-1) on the basis of the first input data (ED-1), which first output data characterize a latent representation r associated with a probability distribution, for example according to q_θ(r | X) = N(μ, σ), where θ represents trainable parameters, such as weights, of the first network architecture φ, where μ represents the mean of the probability distribution, and where σ represents the standard deviation of the probability distribution, wherein, for example, the method has the steps of receiving (110) the first radar signal (X) and generating (112) the first output data (AD-1) by means of the first artificial neural network (ANN-1).
- 3. The method according to claim 2, wherein the first artificial neural network (ANN-1) has a plurality of layers, wherein at least some of the layers have and/or represent at least one of a) a linear layer, or b) a convolutional layer, or c) a non-linear activation layer and/or a self-attention layer.
- 4. The method according to at least one of claims 2 to 3, wherein the second artificial neural network (ANN-2) has a second network architecture ψ and is designed to generate, on the basis of samples x drawn from the probability distribution characterized by the mean μ and the standard deviation σ, second output data (AD-2) which characterize a second radar signal (X̂), for example in the form of a frequency spectrum, for example in the form of a range-Doppler representation, for example according to p_η(X̂ | x), where, for example, η represents trainable parameters, such as weights, of the second network architecture ψ, wherein, for example, the method has the steps of providing (114) the sample x, for example according to x = μ + σ · ε with ε ~ N(0, 1), and generating (116) the second output data (AD-2) by means of the second artificial neural network (ANN-2).
- 5. The method according to at least one of the preceding claims, wherein the training (104) and/or the further training (106) has a training method (TV) using (120) full supervision (e.g., fully supervised), for example based on at least one interference-free signal and at least one signal with interference, for example potentially with interference, for example the first radar signal (X).
- 6. The method according to at least one of the preceding claims, wherein the training (104) and/or the further training (106) has using (122) a Loss Function (LF), for example according to L = L_REC + λ · KL(q_θ(r | X) || p(r)), wherein L characterizes the Loss Function (LF), wherein L_REC characterizes a reconstruction loss associated with the entire network of the model (MOD-VAE), wherein λ characterizes a hyper-parameter, wherein λ > 0, wherein KL characterizes the Kullback-Leibler divergence between the distribution q_θ(r | X) and a distribution p(r), wherein p(r) characterizes a specifiable distribution, such as a "target distribution", for example a distribution in the feature space of the radar signal, wherein, for example, the Loss Function (LF) can be adapted (124) to another specifiable target distribution, for example by replacing p(r) with a different distribution, for example a uniform distribution, for example on an n-dimensional hypercube with a side length of 1.
- 7. The method according to at least one of the preceding claims, having the step of determining (130) at least one training data set of the Training Data (TD) by means of simulation, for example using (130a) a simulation environment (SU), for example determining (130b) three different training data sets (D0, D1, D2) by means of the simulation, wherein, for example, the different training data sets (D0, D1, D2) have respectively different complexity, for example with respect to possible interference.
- 8. The method according to at least one of the preceding claims, having the steps of generating (140), based on the Training Data (TD), a plurality (BATCH), for example a batch, of pairs of potentially disturbed signals and undisturbed signals; replacing (142), with a specifiable probability, at least some, for example all, of the potentially disturbed signals of the plurality (BATCH) with the corresponding undisturbed signals; updating (144) trainable parameters of at least one of the two artificial neural networks (ANN-1, ANN-2) based on a Loss Function (LF), for example a gradient of the Loss Function, wherein the Loss Function (LF) evaluates the agreement between an output signal of the generative model (MOD-VAE) and the undisturbed signal; and optionally repeating (146) at least one of the steps of a) generating (140) or b) updating (144), for example repeating (146a) until a convergence criterion is fulfilled, for example based on a validation data set.
- 9. The method according to claim 8, having at least one of the following elements: a) initializing (150) parameters, such as weights, of the generative model (MOD-VAE), or b) training (152) the trainable parameters θ of the first network architecture φ and the trainable parameters η of the second network architecture ψ using the training method (TV) and a first training data set (D2), or c) further training (154) the trainable parameters θ of the first network architecture φ and the trainable parameters η of the second network architecture ψ using the training method (TV) and a second training data set (D1), or d) further training (156) the trainable parameters η of the second network architecture ψ using the training method (TV) and a third training data set (D0), for example only further training the trainable parameters of the second network architecture.
- 10. An apparatus (200; 200') designed to perform the method according to at least one of the preceding claims.
- 11. A vehicle (10) having at least one apparatus (200) according to claim 10.
- 12. An apparatus (20), such as an infrastructure apparatus, such as a road infrastructure apparatus, such as a roadside unit, having at least one device (200') according to claim 10.
- 13. A computer readable Storage Medium (SM) comprising instructions (PRG) which, when executed by a computer (202), cause the computer to perform the method according to at least one of claims 1 to 9.
- 14. A computer Program (PRG) comprising instructions which, when executed by a computer (202), cause the computer to perform the method according to at least one of claims 1 to 9.
- 15. A Data Carrier Signal (DCS) transmitting and/or characterizing a computer Program (PRG) according to claim 14.
- 16. Use (300) of at least one of the method according to at least one of claims 1 to 9 and/or the apparatus (200; 200') according to claim 10 and/or the vehicle (10) according to claim 11 and/or the apparatus (20) according to claim 12 and/or the computer readable Storage Medium (SM) according to claim 13 and/or the computer Program (PRG) according to claim 14 and/or the Data Carrier Signal (DCS) according to claim 15 for a) operating (301) a radar system, such as a radar system of a vehicle (10), or b) reducing (302) interference, for example repairing interference-affected signals, or c) increasing (303) the efficiency of training, or d) reducing (304) the amount of information required for training.
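The encoder/decoder structure recited in claims 2 to 4 is that of a standard variational autoencoder. As a minimal illustrative sketch (not part of the patent disclosure), assuming simple linear layers, toy dimensions and NumPy throughout, the two networks and the sampling step might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_sigma):
    # ANN-1 (architecture phi): maps a radar signal x to the mean and
    # standard deviation of a latent probability distribution (claim 2).
    mu = W_mu @ x
    sigma = np.exp(W_sigma @ x)  # exp keeps the standard deviation positive
    return mu, sigma

def sample_latent(mu, sigma):
    # Reparameterised draw from N(mu, sigma^2) (the sample x of claim 4).
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def decoder(z, W_dec):
    # ANN-2 (architecture psi): reconstructs a (less disturbed) radar
    # signal from the latent sample (claim 4).
    return W_dec @ z

# Toy dimensions (assumed): an 8-bin "spectrum", a 3-dimensional latent space.
n, d = 8, 3
W_mu = rng.normal(size=(d, n))
W_sigma = rng.normal(size=(d, n))
W_dec = rng.normal(size=(n, d))

x_in = rng.normal(size=n)  # stands in for a potentially disturbed first radar signal
mu, sigma = encoder(x_in, W_mu, W_sigma)
x_hat = decoder(sample_latent(mu, sigma), W_dec)
```

Here `W_mu`, `W_sigma` and `W_dec` stand in for the trainable parameters θ and η; a real implementation would use the layer types listed in claim 3 (linear, convolutional, non-linear activation and/or self-attention layers).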
Description
Method and apparatus for processing data associated with a radar system

Technical Field

The present disclosure relates to a method for processing data associated with a radar system. The present disclosure also relates to an apparatus for processing data associated with a radar system.

Disclosure of Invention

Certain examples relate to a method, e.g., a computer-implemented method, for processing data associated with a radar system, e.g., a radar system of a vehicle or infrastructure equipment, to reduce interference, the method comprising: providing a generative model of the variational autoencoder type, wherein the generative model has a first artificial neural network and a second artificial neural network; providing training data having a plurality of different training data sets for training the generative model; training the first artificial neural network and the second artificial neural network using a first of the training data sets; and further training the second artificial neural network using a second of the training data sets. In some examples, this enables more efficient reduction of interference with respect to radar signals of the radar system, e.g., compared to some conventional methods.
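The staged training just described (train both networks on one data set, then further train only the second network on another, cf. claims 1 and 9) can be sketched as a schedule that first updates both networks and later freezes the encoder. The function name `run_training_schedule` and the placeholder `train_step` are assumptions for illustration only; the data set names are taken from claim 9:

```python
def run_training_schedule(datasets, train_step):
    # datasets: dict with keys "D2", "D1", "D0" (labels from claim 9).
    # train_step(batch, update_encoder=..., update_decoder=...) is a
    # placeholder for one optimisation step of the generative model.
    log = []
    for name, update_encoder in (("D2", True), ("D1", True), ("D0", False)):
        # D2, D1: train both ANN-1 (theta) and ANN-2 (eta);
        # D0: further train only ANN-2 (eta), encoder frozen.
        for batch in datasets[name]:
            train_step(batch, update_encoder=update_encoder, update_decoder=True)
        log.append((name, update_encoder))
    return log

# Minimal usage with a dummy train_step that only records its calls:
calls = []
schedule = run_training_schedule(
    {"D2": [1], "D1": [2], "D0": [3]},
    lambda batch, update_encoder, update_decoder: calls.append((batch, update_encoder)),
)
```

The returned `schedule` records which stages updated the encoder; in this sketch only the final stage on D0 leaves the encoder parameters untouched.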
In some examples it is provided that the first artificial neural network has a first network architecture φ and is designed to receive a first radar signal, for example a potentially erroneous, for example interference-affected, first radar signal, for example in the form of a frequency spectrum, for example in the form of a range-Doppler representation, for example as first input data, and to generate, based on the first input data, first output data which characterize a latent representation r associated with a probability distribution, for example according to q_θ(r | X) = N(μ, σ), where θ represents trainable parameters, such as weights, of the first network architecture φ, where μ represents the mean of the probability distribution, and where σ represents the standard deviation of the probability distribution, wherein the method comprises, for example, receiving the first radar signal and generating the first output data by means of the first artificial neural network. For example, the first artificial neural network has a plurality of layers, wherein at least some of the layers have and/or represent at least one of a) a linear layer, or b) a convolutional layer, or c) a non-linear activation layer and/or a self-attention layer. In other examples, similar considerations may also apply to the second artificial neural network. In some examples, the second artificial neural network has a second network architecture ψ and is designed to generate, from samples x drawn from the probability distribution characterized by the mean μ and the standard deviation σ, second output data which characterize a second radar signal, for example in the form of a frequency spectrum, for example in the form of a range-Doppler representation, for example according to p_η(X̂ | x), where, for example, η represents trainable parameters, such as weights, of the second network architecture ψ, wherein, for example, the method has the steps of providing the sample x, for example according to x = μ + σ · ε with ε ~ N(0, 1), and generating the second output data by means of the second artificial neural network. Thus, in other examples, a synthetic radar signal can be generated that has, for example, a lower level of interference than the first radar signal. In some examples, the training and/or the further training has a training method using full supervision (e.g., fully supervised), e.g., based on at least one interference-free signal and at least one interference-affected signal, e.g., a potentially interference-affected signal, e.g., the first radar signal. In some examples, the training and/or the further training has using a loss function, for example according to L = L_REC + λ · KL(q_θ(r | X) || p(r)), wherein L characterizes the loss function, wherein L_REC characterizes a reconstruction loss associated with the entire network of the model, wherein λ characterizes a hyper-parameter, wherein λ > 0, wherein KL characterizes the Kullback-Leibler divergence between the distribution q_θ(r | X) and a distribution p(r), wherein p(r) characterizes a specifiable distribution, e.g., a "target distribution", e.g., a distribution in the feature space of the radar signal, wherein, for example, the loss function can be adapted to another specifiable target distribution, for example by replacing p(r) with a different distribution, for example a uniform distribution, for example on an n-dimensional hypercube with a side length of 1. In some examples, the method has, for example using a simulation environment, determining at least one training data set of the training data by means of simulation, for example determining three different training data sets by means of the simulation, wherein, for example, the different training data sets have respectively different complexities, for example with respect to possible disturbances. In some examples it is provided that the method has the steps of generating a plurality, e.g., a batch, of pairs of potentially disturbed signals and
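The loss function described above, L = L_REC + λ · KL(q || p), can be sketched as follows, assuming a mean-squared reconstruction error and the standard-normal target distribution p = N(0, I) (both are assumed concrete choices; the disclosure leaves the target distribution specifiable):

```python
import numpy as np

def vae_loss(x, x_hat, mu, sigma, lam=1.0):
    # Loss of the form L = L_REC + lambda * KL(q || p) described above.
    # L_REC: mean squared error between the clean reference signal x and the
    # model output x_hat (full supervision with undisturbed signals).
    # KL term: closed form for a diagonal Gaussian q = N(mu, sigma^2)
    # against the assumed target distribution p = N(0, I).
    rec = np.mean((x - x_hat) ** 2)
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))
    return rec + lam * kl

# If q equals the target N(0, I) and the reconstruction is perfect,
# both terms vanish and the loss is zero.
x = np.zeros(4)
loss0 = vae_loss(x, x, mu=np.zeros(2), sigma=np.ones(2), lam=0.5)
```

Swapping the KL term for one against a uniform distribution on the unit hypercube, as the disclosure suggests, would only change the `kl` expression; the additive structure of the loss stays the same.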