
CN-122017841-A - Physical-guided space deep learning forest canopy height inversion method


Abstract

The invention provides a physically guided spatial deep learning forest canopy height inversion method, belonging to the technical field of forest canopy height inversion. The method comprises: estimating complex coherence from multi-baseline fully polarimetric interferometric synthetic aperture radar data; separating the ground phase with a coherent scattering model and determining the volume-scattering-dominated complex coherence; inverting an initial canopy height and calculating the vertical wavenumber; constructing patched input containing PolInSAR feature channels and physical anchor channels; performing end-to-end prediction on the patches with the output height bounded to a preset physical range; in the training stage, forming a joint loss function from a confidence-modulated supervision loss, a total-variation smoothing regularization over low-confidence regions, and a low-frequency consistency constraint; and, in the inference stage, fusing predictions with exponential-moving-average parameters and Hann-weighted sliding windows to output a whole-map canopy height product. The invention is physically interpretable and spatially consistent, markedly reduces stripe/block artifacts, and improves both canopy height inversion accuracy and whole-map continuity.

Inventors

  • LUO HONGBIN
  • SU SHIQI
  • WU YONG
  • OU GUANGLONG
  • XU YUANSU
  • YU ZHIBO
  • LIU ZHI
  • LU CHI

Assignees

  • Southwest Forestry University (西南林业大学)

Dates

Publication Date
2026-05-12
Application Date
2026-04-07

Claims (10)

  1. A physically guided spatial deep learning forest canopy height inversion method, the method comprising: acquiring multi-baseline fully polarimetric interferometric synthetic aperture radar (PolInSAR) data, estimating complex coherence observables from the PolInSAR data, and calculating the vertical wavenumber; based on the complex coherence observables and the vertical wavenumber, performing a three-stage inversion based on the random volume over ground (RVoG) hybrid scattering model to separate the ground phase and invert an initial canopy height, while obtaining the volume-scattering-dominated complex coherence; constructing a multi-channel feature map comprising PolInSAR feature channels and physical anchor channels from the volume-dominated complex coherence, the ground phase, the vertical wavenumber, and the initial canopy height, and clipping the multi-channel feature map into feature patches; taking reference height labels formed from LiDAR RH100 as supervision information, inputting the feature patches into a spatial convolutional neural network for end-to-end prediction, constructing a confidence map from the coherence information, and training the network with a joint loss function composed of a confidence-modulated supervision loss, a total-variation smoothing regularization over low-confidence regions, and a low-frequency consistency constraint, to obtain a canopy height inversion model; and inputting the multi-channel feature map of the region under test into the trained canopy height inversion model, applying Sigmoid-Scaling mapping to the model output, performing seamless inference by fusing the exponential moving average (EMA) parameter version of the network with Hann-weighted sliding windows, and outputting a whole-map forest canopy height product.
  2. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein estimating the complex coherence observables from the PolInSAR data comprises: co-registering the master and slave images in the PolInSAR data and forming the interferogram; performing complex coherence estimation on the master and slave images within a preset window to obtain the complex coherence observable gamma; decomposing gamma and outputting its real part real(gamma), imaginary part imag(gamma), and coherence magnitude |gamma|; and taking real(gamma), imag(gamma), and |gamma| as input data for the subsequent volume-dominated complex coherence screening, confidence map construction, and multi-channel feature map construction.
  3. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein calculating the vertical wavenumber comprises: reading the incidence angle, slant range, wavelength, perpendicular baseline, and acquisition-mode information corresponding to the PolInSAR data; calculating the vertical wavenumber kz from the interferometric imaging geometry; extracting the magnitude |kz| of kz and its sign term, which characterize the geometric sensitivity of the canopy height inversion; and taking kz, |kz|, and the sign term as data inputs to the subsequent multi-channel feature map construction and confidence map construction.
  4. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein performing the three-stage inversion based on the random volume over ground (RVoG) hybrid scattering model comprises: fitting a coherence line to the observed complex coherences and solving the intersections of the fitted line with the unit circle to obtain candidate ground phases; judging the candidate ground phases to determine the true ground phase phi0, and determining the volume-scattering-dominated complex coherence gamma according to phi0; searching and matching a two-dimensional lookup table (LUT) against the volume-dominated complex coherence gamma to invert the initial canopy height; and taking the ground phase phi0, the volume-dominated complex coherence gamma, and the initial canopy height as input data for the subsequent multi-channel feature map construction, wherein the initial canopy height serves as a physical anchor constraining the subsequent spatial deep learning inversion process.
  5. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein constructing the multi-channel feature map comprising PolInSAR feature channels and physical anchor channels comprises: stacking the real part, imaginary part, and coherence magnitude of the volume-dominated complex coherence gamma with the ground phase phi0, the vertical wavenumber kz, and the normalized physical anchor channel to form the multi-channel feature map; sliding-clipping the multi-channel feature map with a preset patch size and stride to obtain input patches for training and inference; synchronously clipping the LiDAR RH100 reference height labels spatially aligned with the input patches to form the label patches required for supervised training; and feeding the input patches and label patches together into the training of the subsequent spatial convolutional neural network.
  6. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein the spatial convolutional neural network is a spatial U-Net with an encoder-decoder structure, and wherein inputting the feature patches into the spatial convolutional neural network for end-to-end prediction comprises: extracting multi-scale spatial features from the feature patches through a downsampling encoder; restoring the spatial resolution of the multi-scale features through an upsampling decoder; fusing shallow detail information with deep semantic information through skip connections; and outputting canopy height prediction logits at the same scale as the input patch as input to the subsequent physical range mapping and loss function computation.
  7. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein the Sigmoid-Scaling mapping performed on the model output is expressed as: h_hat = h_min + (h_max - h_min) * sigmoid(z); wherein h_hat represents the canopy height prediction after the Sigmoid-Scaling mapping; h_min represents the preset lower bound of the canopy height; h_max represents the preset upper bound of the canopy height; sigmoid(·) represents the Sigmoid function; and z represents the logits output by the spatial convolutional neural network.
  8. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein constructing the confidence map from the coherence information comprises: generating a confidence map conf from the coherence magnitude |gamma| and the geometric sensitivity |kz|, such that regions with higher coherence quality and larger |kz| receive higher confidence; modulating the weights of the supervision and regularization terms in the training loss with the confidence map conf; and taking conf as the weight input to the joint loss function computation, so as to preserve detail in high-confidence regions and enhance stability in low-confidence regions.
  9. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein the joint loss function comprises a confidence-modulated supervision loss term, a low-confidence-region total-variation smoothing regularization term, a low-frequency consistency constraint term, and a weighted sum of the three, specifically: the confidence-modulated supervision loss term adopts the expression L_sup = (1/|Omega_p|) * sum_{q in Omega_p} w(q) * Huber(h_hat(q) - h_ref(q)); wherein L_sup represents the confidence-modulated supervision loss term; |Omega_p| represents the number of pixels in the set of valid supervision pixels within patch p; Omega_p represents the pixel set within patch p where both the label and the SAR data are valid; w(q) represents the confidence weight derived from conf at pixel q; Huber(·) represents the Huber loss function; h_hat(q) represents the network-predicted height at pixel q; and h_ref(q) represents the reference height at pixel q; the low-confidence-region total-variation smoothing regularization term adopts the expression L_tv = sum_q W(q) * (|grad_x h_hat(q)| + |grad_y h_hat(q)|); wherein L_tv represents the low-confidence-region total-variation smoothing regularization term, q represents a pixel, and W(q) represents the confidence-derived weight at pixel q; h_hat(q) represents the network-predicted height at pixel q; grad_x represents the x-direction gradient operator; and grad_y represents the y-direction gradient operator; the low-frequency consistency constraint term adopts the expression L_lf = ||LPF(h_hat_1) - LPF(h_hat_2)||_1; wherein LPF(·) represents a low-pass operator; h_hat_1 represents the prediction of the same input patch after a first random augmentation; h_hat_2 represents the prediction of the same input patch after a second random augmentation; and ||·||_1 represents the L1 norm; the joint loss function adopts the expression L = L_sup + lambda_tv * L_tv + lambda_lf * L_lf; wherein L represents the joint loss function; L_sup represents the confidence-modulated supervision loss term; L_tv represents the low-confidence-region total-variation smoothing regularization term; L_lf represents the low-frequency consistency constraint term; lambda_tv represents the weight coefficient corresponding to the low-confidence-region total-variation smoothing regularization term; and lambda_lf represents the weight coefficient corresponding to the low-frequency consistency constraint term.
  10. The physically guided spatial deep learning forest canopy height inversion method of claim 1, wherein the seamless inference fusing the exponential moving average (EMA) parameter version of the network with Hann-weighted sliding windows specifically comprises: maintaining an EMA version of the spatial convolutional neural network parameters during training, and invoking the EMA-parameter network in the inference stage to predict overlapping sliding-window patches; applying a Hann weight window to the prediction of each sliding-window patch, and performing weighted accumulation and normalized fusion over the overlapping regions; obtaining the fused whole-map prediction with the expression h_hat(r) = sum_i w_i(r) * h_hat_i(r) / sum_i w_i(r); wherein r represents the spatial position in the whole map; i represents the sliding-window index; h_hat_i(r) represents the prediction of the i-th sliding window at position r; and w_i(r) represents the Hann weight of the i-th sliding window at position r; and outputting a continuous whole-map forest canopy height product, thereby suppressing window-boundary stitching artifacts and improving spatial consistency.
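The windowed complex-coherence estimate of claim 2 is the standard sample coherence of two co-registered single-look complex images. A minimal pure-Python sketch (function and variable names are illustrative, not from the patent):

```python
import cmath

def complex_coherence(master, slave):
    """Sample complex coherence over one estimation window:
    gamma = sum(s1 * conj(s2)) / sqrt(sum|s1|^2 * sum|s2|^2)."""
    num = sum(a * b.conjugate() for a, b in zip(master, slave))
    den = (sum(abs(a) ** 2 for a in master) *
           sum(abs(b) ** 2 for b in slave)) ** 0.5
    return num / den if den else 0j

# A constant interferometric phase offset yields |gamma| = 1, arg(gamma) = offset.
m = [cmath.exp(1j * 0.1 * k) for k in range(25)]
s = [cmath.exp(1j * (0.1 * k - 0.3)) for k in range(25)]
g = complex_coherence(m, s)
# g.real, g.imag, abs(g) feed the feature channels described in claim 2
```

In practice the window average is taken over multi-looked neighbourhoods of each pixel; the list here stands in for one such window.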
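Claim 3 derives kz from the interferometric imaging geometry without quoting a formula. A common convention is kz = 4*pi*B_perp / (lambda * R * sin(theta)) for repeat-pass acquisitions (factor 2*pi for single-pass bistatic modes); the factor and the numbers below are assumptions for illustration:

```python
import math

def vertical_wavenumber(wavelength, b_perp, slant_range, theta, repeat_pass=True):
    """kz from interferometric geometry (common convention; verify the
    leading factor against the actual acquisition mode)."""
    m = 4.0 if repeat_pass else 2.0
    return m * math.pi * b_perp / (wavelength * slant_range * math.sin(theta))

# illustrative L-band repeat-pass geometry (0.24 m wavelength, 30 m baseline)
kz = vertical_wavenumber(0.24, 30.0, 7.0e5, math.radians(45))
kz_mag, kz_sign = abs(kz), math.copysign(1.0, kz)  # |kz| and sign term of claim 3
```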
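The LUT search of claim 4 can be illustrated with the standard closed-form volume-only coherence of the RVoG model (a textbook result the patent does not reproduce); the grid spacing, extinction values, and geometry below are purely illustrative:

```python
import cmath
import math

def rvog_volume_coherence(hv, sigma, kz, theta):
    """Volume-only RVoG coherence:
    gamma_v = p1*(exp((p1 + i*kz)*hv) - 1) / ((p1 + i*kz)*(exp(p1*hv) - 1)),
    with p1 = 2*sigma/cos(theta), hv canopy height, sigma extinction."""
    p1 = 2.0 * sigma / math.cos(theta)
    p2 = complex(p1, kz)
    return (p1 * (cmath.exp(p2 * hv) - 1.0)) / (p2 * (cmath.exp(p1 * hv) - 1.0))

# Build a small 2-D lookup table over (height, extinction), then match an
# observed volume-dominated coherence by nearest neighbour in the complex plane.
lut = {(hv, s): rvog_volume_coherence(hv, s, kz=0.1, theta=0.7)
       for hv in range(5, 31, 5) for s in (0.05, 0.1, 0.2)}
observed = rvog_volume_coherence(20, 0.1, kz=0.1, theta=0.7)
hv_init, sigma_init = min(lut, key=lambda k: abs(lut[k] - observed))
# hv_init is the "initial canopy height" physical anchor of claim 4
```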
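The Sigmoid-Scaling mapping of claim 7 bounds logits of any magnitude into the preset physical range [h_min, h_max]; a direct sketch (the 0-40 m bounds are only an example):

```python
import math

def sigmoid_scaling(z, h_min=0.0, h_max=40.0):
    """h = h_min + (h_max - h_min) * sigmoid(z); h_min/h_max are the preset
    physical canopy height bounds (values here are illustrative)."""
    return h_min + (h_max - h_min) / (1.0 + math.exp(-z))

h_mid = sigmoid_scaling(0.0)     # midpoint of the physical range
h_low = sigmoid_scaling(-50.0)   # saturates toward h_min
h_high = sigmoid_scaling(50.0)   # saturates toward h_max
```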
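The claim-9 joint loss on a small 2-D grid, in pure Python. The global mean standing in for the LPF(·) operator, the use of 1 - conf as the total-variation weight, and the lambda values are all illustrative assumptions; the patent specifies only the structure of the terms:

```python
def huber(e, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = abs(e)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def joint_loss(pred, ref, conf, aug1, aug2, lam_tv=0.1, lam_lf=0.1):
    """L = L_sup + lam_tv * L_tv + lam_lf * L_lf over lists-of-lists grids."""
    h, w = len(pred), len(pred[0])
    n = h * w
    # confidence-modulated supervision loss (all pixels assumed valid here)
    l_sup = sum(conf[r][c] * huber(pred[r][c] - ref[r][c])
                for r in range(h) for c in range(w)) / n
    # total variation weighted toward low-confidence pixels (assumed W = 1 - conf)
    l_tv = 0.0
    for r in range(h):
        for c in range(w):
            wq = 1.0 - conf[r][c]
            if c + 1 < w:
                l_tv += wq * abs(pred[r][c + 1] - pred[r][c])
            if r + 1 < h:
                l_tv += wq * abs(pred[r + 1][c] - pred[r][c])
    # low-frequency consistency between two augmented predictions; the global
    # mean is a crude stand-in for the low-pass operator LPF(.)
    mean = lambda g: sum(sum(row) for row in g) / n
    l_lf = abs(mean(aug1) - mean(aug2))
    return l_sup + lam_tv * l_tv + lam_lf * l_lf

flat = [[10.0, 10.0], [10.0, 10.0]]
conf = [[1.0, 1.0], [0.2, 0.2]]
loss_zero = joint_loss(flat, flat, conf, flat, flat)   # perfect, smooth prediction
loss_pos = joint_loss([[10.0, 20.0], [10.0, 20.0]], flat, conf, flat, flat)
```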
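The Hann-weighted overlap fusion of claim 10, as a 1-D sketch (the 2-D case uses the outer product of two such windows). Flooring the taper so pixels covered only by a window edge keep nonzero weight is my addition, not stated in the patent:

```python
import math

def hann_weights(n, floor=1e-3):
    """Hann taper 0.5*(1 - cos(2*pi*k/(n-1))), floored so window edges
    still contribute where no other window overlaps (assumed detail)."""
    return [max(0.5 * (1.0 - math.cos(2.0 * math.pi * k / (n - 1))), floor)
            for k in range(n)]

def fuse_windows(windows, total_len, win_len):
    """h(r) = sum_i w_i(r)*h_i(r) / sum_i w_i(r): weighted accumulation
    followed by normalization over the overlap, as in claim 10."""
    w = hann_weights(win_len)
    num, den = [0.0] * total_len, [0.0] * total_len
    for start, values in windows:
        for k, v in enumerate(values):
            num[start + k] += w[k] * v
            den[start + k] += w[k]
    return [nu / de if de else 0.0 for nu, de in zip(num, den)]

# Two half-overlapping windows that agree fuse to a seamless constant,
# with no jump at the window boundary.
fused = fuse_windows([(0, [10.0] * 8), (4, [10.0] * 8)], 12, 8)
```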

Description

Physical-guided space deep learning forest canopy height inversion method

Technical Field

The invention relates to the technical field of forest remote sensing inversion, in particular to a physically guided spatial deep learning forest canopy height inversion method.

Background

The forest canopy height is an important parameter reflecting the vertical structure of the forest, and is closely related to forest biomass estimation, carbon stock estimation, forest health monitoring, and management. Compared with optical remote sensing, which is easily affected by cloud cover and saturation, synthetic aperture radar has all-day, all-weather imaging capability. Polarimetric interferometric synthetic aperture radar (PolInSAR) combines polarimetric information with interferometric information, so it can both characterize differences in scattering mechanisms and reflect the height of scatterers, giving it considerable potential for regional-scale forest height inversion. In existing PolInSAR forest height inversion, the random volume over ground (RVoG) hybrid scattering model and its three-stage inversion method are widely applied. The method relies on fitting the distribution of complex coherences and coherently separating the ground phase from the volume contribution, and then solves for forest height through a lookup table or similar means.
However, in practical application, forest scattering conditions are complex and show strong spatial heterogeneity, and the observed complex coherence is affected by many factors: coherence quality varies with surface cover, water content, and baseline configuration, and some areas exhibit enhanced decorrelation or violate the volume-scattering-dominance assumption. Purely physical inversion therefore tends to produce systematic bias in some scenes, accompanied by stripe-like or granular artifacts; meanwhile, when the vertical wavenumber is small and the geometric sensitivity is insufficient, the model's response to height change weakens, making the inversion result unstable or prone to height saturation and error amplification. On the other hand, supervised regression methods based on deep learning have in recent years been able to learn the statistical relationship between features and height from sample data and, under certain conditions, reduce the local error of the traditional model. However, such methods depend strongly on the training data distribution: when the scene type, imaging geometry, or coherence quality changes, the model generalizes poorly; and with direct pixel-by-pixel regression, or in the absence of physical constraints, the prediction can suffer from block jumps, discontinuous texture, or compression of high values in low-sensitivity, low-coherence regions, degrading the quality and usability of the whole map. Especially in applications requiring continuous canopy height products, stitched inference, noise propagation, and boundary effects may further amplify spatial inconsistency.
Therefore, a forest height inversion method combining physical interpretability with the advantages of data-driven learning is needed: on one hand, the physical model inversion result provides reliable prior anchoring that constrains the physical plausibility of the network output; on the other hand, a spatial modeling and confidence modulation mechanism suppresses artifacts under uneven coherence quality and improves spatial continuity, yielding a forest canopy height mapping result that is more accurate and more stable over the whole scene.

Disclosure of Invention

The embodiment of the invention aims to provide a physically guided spatial deep learning forest canopy height inversion method, which at least solves the technical problems that the existing PolInSAR/RVoG physical model inverts unstably under complex scattering and uneven coherence quality and readily produces stripe/block artifacts, and that deep learning methods lacking physical constraints generalize insufficiently and yield discontinuous whole-map results. To achieve the above object, the invention provides a physically guided spatial deep learning forest canopy height inversion method, the method comprising: acquiring multi-baseline fully polarimetric interferometric synthetic aperture radar (PolInSAR) data, estimating complex coherence observables from the PolInSAR data, and calculating the vertical wavenumber; based on the complex coherence observables and the vertical wavenumber, performing a three-stage inversion based on the random volume over ground (RVoG) hybrid scattering model to separate the ground phase