CN-117890906-B - ISAR enhanced imaging method based on AU-Net

CN 117890906 B

Abstract

An ISAR enhanced imaging method based on AU-Net comprises the following steps. Step 1: simulate ISAR echo data from a random scattering point model, and construct an ISAR image training set through random noise addition, downsampling, and imaging preprocessing. Step 2: construct an AU-Net imaging network for ISAR sparse high-resolution imaging by introducing an attention mechanism into a U-Net structure, and initialize the network model parameters. Step 3: design an ISAR imaging loss function according to the constructed AU-Net imaging network. Step 4: formulate an ISAR enhanced imaging training strategy, train the network, and update the AU-Net imaging network model parameters to obtain an optimal ISAR high-resolution imaging model. Step 5: obtain simulated/measured target ISAR echo data, construct an ISAR image test set through preprocessing similar to step 1, and realize ISAR enhanced imaging with the obtained optimal ISAR high-resolution imaging model. The invention improves ISAR imaging performance under low signal-to-noise ratio and data-loss conditions, and further improves the recovery of weak scattering points.

Inventors

  • Kang Hailong
  • Xin Rou
  • Wei Xiaoxia
  • Shen Tuoyu
  • Li Jun
  • Gao Dawei
  • Zhang Jun
  • Tao Haihong
  • Liao Guisheng

Assignees

  • Xidian University (西安电子科技大学)
  • Hangzhou Institute of Technology, Xidian University (西安电子科技大学杭州研究院)

Dates

Publication Date
2026-05-08
Application Date
2024-01-17

Claims (5)

  1. An ISAR enhanced imaging method based on AU-Net, characterized by comprising the following steps:
     Step 1: generate ISAR echo data by random scattering point model simulation, and construct an ISAR image training set through random noise addition, downsampling, and imaging preprocessing.
     Step 2: construct an AU-Net imaging network for ISAR sparse high-resolution imaging by introducing an attention mechanism into a U-Net structure, and initialize the network model parameters.
     Step 3: design an ISAR imaging loss function according to the constructed AU-Net imaging network.
     Step 4: formulate an ISAR enhanced imaging training strategy over the constructed training set and imaging loss function, train the network, and update the AU-Net imaging network model parameters to obtain an optimal ISAR high-resolution imaging model.
     Step 5: obtain simulated/measured target ISAR echo data, construct an ISAR image test set through preprocessing similar to step 1, and realize ISAR enhanced imaging with the obtained optimal ISAR high-resolution imaging model.
     Step 2 specifically comprises the following steps:
     (2a) The module m1 comprises four encoders for extracting ISAR image features from X. The first three encoders each comprise two convolution layers, each followed by an activation layer, and one max pooling layer; the last encoder has essentially the same structure as the first three but contains no max pooling layer.
     (2b) The module m2 comprises four decoders, which map the ISAR image features extracted by m1 to an ideal scattering point model. Each decoder comprises one upsampling layer, one ISAR attention feature fusion layer m3, two convolution layers, and an activation layer. An attention mechanism is introduced into the skip connections of the U-Net structure to form the ISAR attention feature fusion layer: the ISAR features output by each encoder are passed through an attention feature fusion module, and the key features of the ISAR image are fused with the output ISAR features of the corresponding decoder.
     (2c) The modules m3 are distributed between the encoders and decoders and are used to extract attention features from m1 and fuse them with the features obtained by m2. The attention feature fusion module is a cascade of an ISAR channel attention module and an ISAR spatial attention module, which learn weights for the ISAR feature maps at different stages in the channel and spatial dimensions, respectively. The ISAR channel attention module converts global ISAR image information into channel information by max pooling and average pooling respectively, learns the two channel descriptors through a fully connected layer and fuses them, generates an ISAR channel attention feature map through a sigmoid activation, and multiplies this map with the ISAR input feature map F to generate the feature F' required by the ISAR spatial attention module. The ISAR spatial attention module takes the output F' of the ISAR channel attention module as its input feature map, performs max pooling and average pooling along the channel dimension, concatenates the results in the channel dimension, reduces the channel number to 1 through a convolution operation, generates an ISAR spatial attention feature map through sigmoid, and finally multiplies this map with the input feature map F' to obtain the final ISAR attention feature map F''.
     (2d) The module m4 is a convolution layer (conv) that performs channel fusion on the ISAR feature map finally obtained by m2 so as to output the ISAR high-resolution imaging result. The dropout connection between m1 and m2 is:
     Equation 4: F_m2 = D_p ⊙ F_m1
     where D_p is the generated probability (dropout mask) vector, F_m1 is the ISAR output feature map of the m1 module, and F_m2 is the ISAR input feature map of the m2 module.
     (2e) Connect the above modules: the ISAR image X to be reconstructed is first input into the m1 module; after each of the first three encoders, the features are passed both to the next encoder and to an attention fusion layer m3; after the fourth encoder, the features pass through a dropout layer into m2; the result obtained by each m3 module is fused at the corresponding decoder; finally, the output of m2 passes through the m4 module to obtain the final high-resolution ISAR image. This constructs the AU-Net imaging network for ISAR sparse high-resolution imaging, and the network parameters W are randomly initialized.
     The channel attention feature F' is expressed as:
     Equation 5: F' = σ(W_1(W_0(F_max^c)) + W_1(W_0(F_avg^c))) ⊗ F
     where σ is the sigmoid operation, F denotes the input feature map, F_max^c and F_avg^c denote the ISAR channel feature maps after max pooling and average pooling, respectively, and W_0, W_1 are the weight parameters of the fully connected layers.
     The ISAR attention feature map F'' is expressed as:
     Equation 6: F'' = σ(f^{k×k}([F'_max^s ; F'_avg^s])) ⊗ F'
     where f^{k×k} denotes the convolution operation with a k×k kernel, and F'_max^s, F'_avg^s denote the ISAR spatial feature maps of F' after max pooling and average pooling, respectively.
     In step 3, the imaging loss function is defined as:
     Equation 7: L = α·L_MSE + λ_1·L_1 + λ_2·L_2, with L_MSE = (1/B)·Σ_{i=1..B} ||X̂_i − X_L,i||_2^2, L_1 = ||W||_1, L_2 = ||W||_2
     where L_MSE is the MSE loss, L_1 is the L1 regularization term, and L_2 is the L2 regularization term; X̂ and X_L denote the reconstructed image and the label image, respectively; B denotes the number of samples in a batch; W denotes the imaging parameters; ||·||_1 and ||·||_2 denote the first and second norms, respectively; and α, λ_1, λ_2 are hyperparameters.
  2. The ISAR enhanced imaging method based on AU-Net according to claim 1, wherein step 1 specifically includes the following steps:
     (1a) Under the given radar parameter conditions, use Matlab to randomly generate 100-1000 random scattering points obeying a Gaussian distribution, with the scattering coefficient of each scattering point randomly distributed in the interval (0, 1); the ideal scattering point model serves as the label sample X_L in the training set.
     (1b) The target is composed of K scattering points; for each scene, the ISAR imaging model simulates and generates the corresponding radar echo S after motion compensation and range compression:
     Equation 1: S(f_r, t_m) = Σ_{i=1..K} σ_i·T_p·sinc(T_p·(f_r + (2γ/c)·R_i(t_m)))·exp(−j·(4π/λ)·R_i(t_m)), with R_i(t_m) = x_i·sin(ω·t_m) + y_i·cos(ω·t_m)
     where f_r and t_m denote the range frequency and the azimuth slow time, respectively; ω denotes the target rotational angular velocity; σ_i denotes the scattering coefficient of the i-th scattering point; T_p, γ, and λ denote the pulse width, chirp rate, and wavelength, respectively; B and f_c denote the bandwidth and carrier frequency, respectively; and (x_i, y_i) denote the original coordinates of the i-th scattering point on the target.
     (1c) Random white Gaussian noise at different signal-to-noise ratios is added to the echo S, and data at different ratios are randomly extracted in the azimuth and range directions to obtain the echo Y:
     Equation 2: Y = D_a·S·D_r + N_0
     where Y, D_a, D_r, S, and N_0 denote the sparse, noise-added echo signal matrix, the azimuth downsampling matrix, the range downsampling matrix, the original echo signal matrix, and the noise matrix, respectively; N denotes the number of range cells, M the number of azimuth cells, M̄ the number of azimuth cells after downsampling, and N̄ the number of range cells after downsampling.
     (1d) Perform imaging preprocessing on the sparse, noisy echo Y with the range-Doppler (RD) algorithm to obtain the ISAR image X to be reconstructed, and take the result as an input sample of the training set. The overall enhanced imaging process is:
     Equation 3: W* = argmin_W (1/B)·Σ_{i=1..B} ||f(X_i; W) − X_L,i||_2^2
     where B denotes the number of samples in a batch, W denotes the imaging parameters, and X and X_L denote the ISAR image to be reconstructed and the label image, respectively.
  3. The ISAR enhanced imaging method based on AU-Net according to claim 2, wherein in step 2 the AU-Net imaging network for ISAR sparse high-resolution imaging comprises four parts: an ISAR image feature extraction module m1, an ISAR image restoration module m2, an ISAR attention feature fusion module m3, and an ISAR image output layer m4.
  4. The ISAR enhanced imaging method based on AU-Net according to claim 1, wherein step 4 is implemented as follows:
     (4a) Input the preprocessed ISAR image X into the AU-Net imaging network and compute layer by layer in the network cascade order to obtain the prediction result X̂.
     (4b) Update and optimize the network parameters W with the Adam algorithm; the update formula is:
     Equation 8: W̃ = W − η·∂L/∂W
     where W̃ is the updated network parameter, W is the network parameter before the update, η is the learning rate, and ∂L/∂W is the partial derivative of the loss function with respect to W.
     (4c) Repeat the above computation with the updated weights W̃ over multiple iterations, and save the network parameters W* with the minimum loss during the iterations to obtain the optimal ISAR high-resolution imaging model.
  5. The ISAR enhanced imaging method based on AU-Net according to claim 4, wherein step 5 is implemented as follows:
     (5a) Apply random noise addition and random downsampling to the simulated/measured data to obtain the echo data Y_t.
     (5b) Apply the RD algorithm to Y_t for imaging preprocessing to obtain the ISAR images X_t to be reconstructed in the test set.
     (5c) Input X_t into the trained optimal AU-Net imaging model to obtain the ISAR enhanced imaging result Î_t:
     Equation 9: Î_t = f(X_t; W*)
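The cascaded channel and spatial attention described in claim 1, step (2c) and Equations 5-6, follows the familiar CBAM pattern. The NumPy sketch below illustrates that computation on a single feature map; the weight matrices W0/W1, the reduction ratio, and the box-filter stand-in for the learned k×k spatial convolution are illustrative assumptions, not the patent's trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(F, W0, W1):
    """Equation 5: global max/avg pooling -> shared two-layer MLP -> sigmoid gate.
    F: feature map (C, H, W); W0: (C//r, C); W1: (C, C//r)."""
    f_max = F.max(axis=(1, 2))    # (C,) global max pooling
    f_avg = F.mean(axis=(1, 2))   # (C,) global average pooling
    # shared MLP on both descriptors, fused by addition, gated by sigmoid
    a = sigmoid(W1 @ np.maximum(W0 @ f_max, 0) + W1 @ np.maximum(W0 @ f_avg, 0))
    return F * a[:, None, None]   # reweight each channel of F

def spatial_attention(Fp, k=3):
    """Equation 6: channel-wise max/avg -> fuse -> kxk conv -> sigmoid gate.
    The learned kxk convolution is replaced by a kxk box filter for brevity."""
    m = Fp.max(axis=0)            # (H, W) max over channels
    a = Fp.mean(axis=0)           # (H, W) mean over channels
    s = m + a                     # stand-in fusion of the two pooled maps
    H, W = s.shape
    pad = k // 2
    sp = np.pad(s, pad, mode="edge")
    conv = np.zeros_like(s)
    for i in range(H):            # kxk average filter as an untrained conv
        for j in range(W):
            conv[i, j] = sp[i:i + k, j:j + k].mean()
    gate = sigmoid(conv)          # spatial attention map in (0, 1)
    return Fp * gate[None, :, :]
```

Both gates lie in (0, 1), so the attended map is an elementwise down-weighting of the input, which is what lets the network emphasize strong-scatterer regions while suppressing noise.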
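Claim 2's data-generation pipeline (random scatterers, dechirped echo, then sparse noisy echo Y per Equation 2) can be sketched as follows. The radar parameters and the turntable phase model with rotation rate ω are illustrative assumptions, not the patent's values, and the echo keeps only the phase term of Equation 1 for brevity.

```python
import numpy as np

# Hypothetical radar parameters (not taken from the patent).
C_LIGHT = 3e8
FC = 10e9          # carrier frequency (Hz)
OMEGA = 0.02       # target rotational angular velocity (rad/s)

def simulate_echo(points, f_r, t_m):
    """Dechirped ISAR echo after motion compensation (turntable model sketch).
    points: iterable of (x, y, sigma); f_r: (N,) range freqs; t_m: (M,) slow times."""
    S = np.zeros((len(f_r), len(t_m)), dtype=complex)
    for x, y, sigma in points:
        # instantaneous projected range of the scatterer under rotation OMEGA
        r = x * np.sin(OMEGA * t_m) + y * np.cos(OMEGA * t_m)        # (M,)
        phase = -4j * np.pi * (FC + f_r[:, None]) * r[None, :] / C_LIGHT
        S += sigma * np.exp(phase)
    return S

def sparse_noisy(S, keep_rg=0.5, keep_az=0.5, snr_db=5, rng=None):
    """Equation 2 sketch: random row/column selection plus white Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    N, M = S.shape
    rows = np.sort(rng.choice(N, int(N * keep_rg), replace=False))   # range cells
    cols = np.sort(rng.choice(M, int(M * keep_az), replace=False))   # azimuth cells
    Y = S[np.ix_(rows, cols)]
    p_sig = np.mean(np.abs(Y) ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(Y.shape)
                                    + 1j * rng.standard_normal(Y.shape))
    return Y + noise
```

Pairs of (RD image of Y, ideal scatterer map) generated this way form the (input, label) samples of the training set described in steps (1a)-(1d).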
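The composite loss of Equation 7 and the parameter update of claim 4 (Equation 8, refined by Adam) can be sketched as below. The regularization weights and Adam hyperparameters are illustrative defaults, not values from the patent.

```python
import numpy as np

def composite_loss(pred, label, W, lam1=1e-4, lam2=1e-4):
    """Equation 7 sketch: MSE reconstruction loss plus L1 and L2 terms on W."""
    mse = np.mean(np.abs(pred - label) ** 2)
    return mse + lam1 * np.sum(np.abs(W)) + lam2 * np.sum(W ** 2)

def adam_step(W, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. Equation 8 shows the plain gradient step
    W_new = W - lr * dL/dW; Adam refines it with bias-corrected moments."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    W_new = W - lr * m_hat / (np.sqrt(v_hat) + eps)
    return W_new, m, v
```

Step (4c) then amounts to looping this update over batches and keeping the parameters W* that achieved the smallest loss.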
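Claim 5's test-time pipeline (RD preprocessing, then one forward pass per Equation 9) reduces to a few lines. Here a 2-D FFT stands in for the RD algorithm on motion-compensated echoes, and `model` is a placeholder for the trained AU-Net, both illustrative assumptions.

```python
import numpy as np

def rd_image(Y):
    """Range-Doppler preprocessing sketch: 2-D FFT of the echo, magnitude
    taken and normalized to [0, 1] as network input."""
    img = np.abs(np.fft.fftshift(np.fft.fft2(Y)))
    return img / (img.max() + 1e-12)

def enhance(Y_test, model):
    """Steps (5b)-(5c): preprocess the echo, then reconstruct with the model."""
    X_test = rd_image(Y_test)   # ISAR image to be reconstructed
    return model(X_test)        # Equation 9: forward pass of the trained network
```

Because the network is feed-forward, test-time imaging is a single pass, which is the efficiency advantage over iterative CS reconstruction noted in the description.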

Description

ISAR enhanced imaging method based on AU-Net

Technical Field

The invention belongs to the technical field of radar imaging, and particularly relates to an ISAR enhanced imaging method based on AU-Net.

Background

Inverse Synthetic Aperture Radar (ISAR) technology can obtain high-resolution images of non-cooperative targets in all weather, day and night. Conventional ISAR imaging is generally based on the range-Doppler (RD) algorithm; more recently, the compressed sensing (Compressed Sensing, CS) algorithm has attracted extensive research and has been widely applied to ISAR imaging. Compared with the RD algorithm, the CS algorithm can obtain a high-resolution, low-sidelobe reconstructed image even when radar data are missing, but its imaging quality is limited by the sparse observation model. In addition, the CS algorithm has high computational complexity, and its iterative imaging takes a long time. With the rapid development of deep learning (Deep Learning, DL) technology, its excellent feature learning and fitting capability provides a new technical means for going beyond conventional ISAR imaging methods. Beginning in 2019, researchers successively applied DL techniques to the ISAR imaging field, using different network frameworks such as CV-CNN, FCNN, and ResNet for ISAR learning-based imaging, and achieved better performance than the CS algorithm. This research shows that DL can explore the complex nonlinear mapping between training data and reconstructed images to establish an implicit imaging model, further improving ISAR imaging quality and efficiency when radar data are missing. Because of the strong image reconstruction capability of U-Net networks, researchers have proposed enhanced imaging methods incorporating the U-Net architecture to improve ISAR imaging quality.
Although the DL methods described above have made important progress in improving ISAR imaging quality, their performance, like that of CS methods, is significantly affected by noise and data loss. Furthermore, most of the above methods rely on minimizing the mean square error (Mean Square Error, MSE), which tends to make the reconstructed image too smooth and lose image details such as weak scattering points.

Disclosure of Invention

In order to overcome the above defects in the prior art, the invention aims to provide an ISAR enhanced imaging method based on AU-Net, which can improve ISAR imaging performance under low signal-to-noise ratio and data-loss conditions and further improve the recovery of weak scattering points. To achieve this purpose, the invention adopts the following technical scheme.

An ISAR enhanced imaging method based on AU-Net comprises the following steps. Step 1: generate ISAR echo data by random scattering point model simulation, and construct an ISAR image training set through random noise addition, downsampling, and imaging preprocessing. Step 2: construct an AU-Net imaging network for ISAR sparse high-resolution imaging by introducing an attention mechanism into a U-Net structure, and initialize the network model parameters. Step 3: design an ISAR imaging loss function according to the constructed AU-Net imaging network. Step 4: formulate an ISAR enhanced imaging training strategy over the constructed training set and imaging loss function, and update the AU-Net imaging network model parameters to obtain an optimal ISAR high-resolution imaging model. Step 5: acquire simulated/measured target ISAR echo data, construct an ISAR image test set through preprocessing similar to step 1, and realize ISAR enhanced imaging with the obtained optimal ISAR high-resolution imaging model.

Step 1 specifically comprises the following steps. (1a) Under the given radar parameter conditions, randomly generate 100-1000 random scattering points obeying a Gaussian distribution with Matlab, with the scattering coefficient of each scattering point randomly distributed in the interval (0, 1); the ideal scattering point model of each scene serves as the label sample X_L in the training set. (1b) The target is composed of K scattering points; for each scene, the ISAR imaging model simulates and generates the corresponding radar echo S after motion compensation and range compression. (The random scattering point model is the target to be imaged and has no imaging function itself; the imaging model in this substep refers to the simulation of the imaging scene. The two are related in that the random scattering point model must be passed through the imaging model to generate the corresponding radar echoes.) Here f_r and t_m denote the range frequency and the azimuth slow time, respectively, and ω denotes the target rotational angular velocity.