CN-121982395-A - Mars hyperspectral image classification method based on CNN and GCN

CN121982395A

Abstract

The invention relates to a Mars hyperspectral image classification method based on CNN and GCN, and belongs to the technical field of remote sensing image classification. The method comprises the steps of constructing a double-branch hybrid neural network comprising CNN branches and GCN branches, wherein the CNN branches are of a lightweight network structure and comprise a denoising dimension reduction module, a depth separable convolution module and a multi-scale convolution module, the GCN branches are composed of HGNNConv networks, a Mars hyperspectral image to be classified is input into the double-branch hybrid neural network, pixel-level features and super-pixel-level features of the Mars hyperspectral image are respectively extracted through the CNN branches and the GCN branches, a channel attention mechanism and a spatial attention mechanism are introduced, weight distribution and fusion are carried out on the pixel-level features and the super-pixel-level features, and comprehensive features are obtained to finish classification of the Mars hyperspectral image based on the comprehensive features. The method aims to solve the technical problems that the characteristics of the pixel level and the super pixel level cannot be fused efficiently and distortion is easy to occur in the characteristic propagation process in the prior art.

Inventors

  • TIAN ANHONG
  • CHEN TAO
  • FU CHENGBIAO
  • JIN HUAIPING
  • FAN YIBO

Assignees

  • Kunming University of Science and Technology (昆明理工大学)

Dates

Publication Date
2026-05-05
Application Date
2026-01-22

Claims (9)

  1. A method for classifying Mars hyperspectral images based on CNN and GCN, characterized by comprising the following steps: Step 1, constructing a double-branch hybrid neural network comprising a CNN branch and a GCN branch, wherein the CNN branch is of a lightweight network structure and comprises a denoising dimension reduction module, a depth separable convolution module and a multi-scale convolution module; the GCN branch is formed by an HGNNConv network, and the HGNNConv network comprises a super-pixel generation and pixel-to-super-pixel mapping construction operation, a super-edge generation operation, a super-edge fusion operation and a ContraNorm normalization operation; Step 2, inputting the Mars hyperspectral image to be classified into the double-branch hybrid neural network, extracting pixel-level features of the Mars hyperspectral image through the CNN branch, and extracting super-pixel-level features of the Mars hyperspectral image through the GCN branch; and Step 3, introducing a channel attention mechanism and a spatial attention mechanism, and carrying out weight distribution and fusion on the pixel-level features and the super-pixel-level features to obtain comprehensive features, so as to complete classification of the Mars hyperspectral image based on the comprehensive features.
  2. The method for classifying Mars hyperspectral images based on CNN and GCN according to claim 1, wherein the CNN branch is specifically: the denoising and dimension-reduction module adopts two 1×1 convolution layers, each followed by BatchNorm and LeakyReLU activation, to synchronously realize data denoising and feature dimension reduction; the depth separable convolution comprises a depthwise convolution and a pointwise convolution, wherein the depthwise convolution employs a single-channel convolution kernel of a given size to independently carry out spatial convolution on each spectral channel of the input Mars hyperspectral image features, all spatial convolution results are then spliced to output a concatenated feature map, and the pointwise convolution uses 1×1 convolution kernels to merge the features of different channels; the multi-scale convolution module is composed of two independent multi-scale convolution layers, each layer comprising three convolution blocks with different receptive fields designed in parallel, wherein the different kernel sizes comprise 3×3, 5×5 and 7×7, and after each multi-scale convolution layer the features extracted by the different kernel sizes are integrated through a splicing operation to obtain a cross-scale fused feature.
  3. The method for classifying Mars hyperspectral images based on CNN and GCN according to claim 1, wherein the super-pixel generation and pixel-to-super-pixel mapping construction operations are specifically: dividing the Mars hyperspectral image by super-pixel segmentation to generate super-pixel areas, each super-pixel serving as a hypergraph node; defining a mapping matrix Q to represent the membership of pixels to super-pixels, whose element Q_{i,j} describes the association between the i-th pixel and the j-th super-pixel: Q_{i,j} = 1 if x_i ∈ S_j, and Q_{i,j} = 0 otherwise; wherein S_j denotes the j-th super-pixel, Flatten(X) denotes the image spatially flattened into a pixel sequence, and x_i denotes its i-th pixel; the conversion between pixels and super-pixel nodes is realized through an encoder and a decoder, expressed as: V = Encoder(X; Q) = Q̂^T Flatten(X); X̃ = Decoder(V; Q) = Reshape(Q V); wherein Q̂ denotes the column-normalized Q, V denotes the feature matrix of all super-pixel nodes, Q represents the relationship of pixels to super-pixels, Reshape restores the spatial dimensions of the flattened data, the encoder converts the pixel map into graph nodes, and the decoder realizes the super-pixel-node-to-pixel mapping.
  4. The classification method of Mars hyperspectral images based on CNN and GCN according to claim 1, wherein the super-edge generation operation is based on the super-pixel undirected graph structure and adopts three parallel modes to generate super-edges, specifically: pairwise super-edges, obtained by directly converting the original undirected edges into binary super-edges, preserving the low-order topological relations of the local spatial neighborhood; k-hop super-edges, obtained by taking a central node as a starting point and connecting all nodes in its k-hop neighborhood, capturing multi-order long-range spatial context information; and kNN super-edges, obtained by selecting the nodes with the largest structural similarity in the spatial neighborhood to construct the super-edge, using SSIM as the similarity measurement, wherein SSIM is defined as: SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ; in the formula, l(x, y) denotes the luminance comparison, c(x, y) denotes the contrast comparison, s(x, y) denotes the structure comparison, and α, β, γ represent the structural parameters; the SSIM is linearly adjusted to [-1, 1], ensuring compatibility with the spatial weights.
  5. The classification method of Mars hyperspectral images based on CNN and GCN according to claim 1, wherein the super-edge fusion operation is specifically: carrying out weight fusion on the super-edges; calculating the spectral similarity s_spec(i, j) between nodes, expressed as: s_spec(i, j) = exp(−‖x_i − x_j‖² / σ²); in the formula, ‖·‖ is the L2 norm, x_i and x_j respectively represent the spectral features of nodes i and j, and σ is an adjustable parameter in the feature-similarity measurement function; calculating the spatial similarity s_spat(i, j) between nodes, expressed as: s_spat(i, j) = exp(−‖p_i − p_j‖² / δ²); in the formula, p_i and p_j respectively represent the centers of super-pixel nodes i and j, and δ is an adjustable parameter in the spatial-distance measurement function; designing for each super-edge e a spectral measurement weight w_spec(e) and a spatial measurement weight w_spat(e), expressed as: w_spec(e) = (1 / C(|e|, 2)) Σ_{i<j, i,j∈e} s_spec(i, j); w_spat(e) = (1 / C(|e|, 2)) Σ_{i<j, i,j∈e} s_spat(i, j); wherein |e| denotes the number of nodes of the super-edge and C(|e|, 2) denotes the number of two-element combinations selected from them; calculating the final weight of the super-edge, realizing the dual constraint of spectrum and space, expressed as: w(e) = w_spec(e) · w_spat(e); in the formula, w(e) is the final weight of the super-edge.
  6. The method for classifying Mars hyperspectral images based on CNN and GCN according to claim 1, wherein the ContraNorm normalization operation is specifically as follows: regarding each node x_i in the fully connected graph structure as the representation vector of one sample, the global uniformity loss over the node representations of the graph is obtained, expressed as: L_uniform = Σ_{i∈V} log Σ_{j∈V} exp(x_i^T x_j / τ); wherein V is the node set and τ is a temperature parameter; taking the derivative of L_uniform with respect to the feature matrix X, whose i-th row is the representation x_i, the matrix form is obtained as: ∂L_uniform/∂X = (1/τ)(P + P^T) X; wherein P = softmax(X X^T / τ) with the softmax taken row-wise; the feature scale is maintained using layer normalization, whose update form is: LayerNorm(x) = γ ⊙ (x − μ) / √(σ² + ε) + β; wherein γ and β are learnable parameters for rescaling the representation x, ε is a small constant, μ is the mean and σ² is the variance; LayerNorm is combined with a single gradient-descent step to obtain the update: X_new = LayerNorm(X_old − s · ∂L_uniform/∂X); wherein X_old and X_new respectively represent the pre-update and post-update representations, and s is the step size of the gradient descent.
  7. The classification method of Mars hyperspectral images based on CNN and GCN according to claim 1, wherein extracting the pixel-level features of the Mars hyperspectral image through the CNN branch is specifically: in each convolution block, firstly adopting a batch normalization layer to adjust the data distribution, introducing a 1×1 convolution layer after the normalization layer to perform feature-dimension transformation, then using the LeakyReLU activation function to introduce nonlinearity, inserting an average pooling layer, and after the average pooling, extracting local spatial features with a two-dimensional CNN of a given kernel size, expressed as: v_l(i, j) = f( Σ_{p,q} w_{p,q} · z_{l−1}(i+p, j+q) + b ); wherein v_l(i, j) represents the output feature of the l-th layer at spatial position (i, j), w_{p,q} represents the kernel weights, z_{l−1}(i+p, j+q) represents the input features of the l-th layer at spatial position (i+p, j+q) after the average pooling operation, b is the bias, and f(·) is the activation function.
  8. The classification method of Mars hyperspectral images based on CNN and GCN according to claim 1, wherein extracting the super-pixel-level features of the Mars hyperspectral image through the GCN branch is specifically: defining the hypergraph Laplacian Δ, expressed as: Δ = I − D_v^{−1/2} H W D_e^{−1} H^T D_v^{−1/2}; wherein I is the identity matrix, D_v is the node degree matrix, D_e is the super-edge degree matrix, H is the incidence matrix, and W is the super-edge weight matrix; defining the hypergraph adjacency relation, whereby the neighbor set N(v) of a vertex v comprises all vertices connected to v by some super-edge, i.e. N(v) = { u | ∃ e ∈ E, u ∈ e and v ∈ e }, wherein e denotes a super-edge and E the set of all super-edges; the HGNNConv network performs feature propagation specifically as follows: vertices aggregate into super-edges, the l-th layer super-edge features Y^{(l)} being: Y^{(l)} = D_e^{−1} H^T X^{(l)}; wherein X^{(l)} represents the super-pixel vertex features input to the l-th convolution layer and W represents the super-edge weight matrix; super-edges update vertices, the l-th layer updated vertex features X^{(l+1)} being expressed as: X^{(l+1)} = σ( D_v^{−1} H W Y^{(l)} Θ^{(l)} ); wherein σ(·) is the activation function and Θ^{(l)} is a learnable weight matrix; merging into matrix form: X^{(l+1)} = σ( D_v^{−1} H W D_e^{−1} H^T X^{(l)} Θ^{(l)} ); the frequency-domain HGNNConv is: X^{(l+1)} = σ( D_v^{−1/2} H W D_e^{−1} H^T D_v^{−1/2} X^{(l)} Θ^{(l)} ); and a two-layer HGNNConv network is adopted to construct the hypergraph convolutional network so as to realize high-order feature-interaction modeling.
  9. The classification method of Mars hyperspectral images based on CNN and GCN according to claim 1, wherein Step 3 specifically comprises: Step 3.1, for the double-branch features, the channel description vectors are obtained through Max and Avg pooling and sent to a shared MLP to calculate the channel weights M_c, expressed as: M_c = σ( MLP(MaxPool(F)) + MLP(AvgPool(F)) ); wherein σ represents the Sigmoid function for normalizing the weights to the (0, 1) range; Step 3.2, obtaining the channel weights of the features in the two subnets, and obtaining the cross-refined features of each branch through the channel-attention crossing modules, wherein the cross matrix is obtained by multiplying the channel weights of the two branches; Step 3.3, after Softmax normalization of the cross matrix, the fused feature is characterized by Max and Avg two-path pooling and spatial splicing, and the weighting coefficient M_s is then obtained through a shared convolution layer: M_s = σ( f([MaxPool(F); AvgPool(F)]) ); wherein f represents a convolution layer with a given kernel size; Step 3.4, adding a residual connection to the fusion module, the fused features being adaptively output through a fully connected layer to obtain the output; wherein W_out and b_out respectively represent the weight and bias of the output fully connected layer, and the adaptive weighting parameter lies in the range [0, 1]; Step 3.5, introducing the Focal Loss L_FL, expressed as: L_FL = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} α_c (1 − p_{i,c})^γ y_{i,c} log p_{i,c}; wherein α_c is the class weight, γ is the focusing parameter, N is the total number of samples, C is the total number of categories, and p_{i,c} is the prediction probability that the i-th sample x_i belongs to the c-th class.
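The depthwise separable convolution of claim 2 (a per-channel spatial convolution followed by a 1×1 pointwise merge) can be sketched as follows. This is a minimal numpy illustration, not the patented implementation: the kernel size, 'valid' padding, and function name are assumptions for illustration.

```python
import numpy as np

def depthwise_separable_conv(x, depth_kernels, point_kernels):
    """Depthwise step: one k x k single-channel kernel per spectral channel,
    applied independently ('valid' padding); pointwise step: 1x1 kernels that
    merge the channels. Shapes: x (C, H, W), depth_kernels (C, k, k),
    point_kernels (C_out, C)."""
    C, H, W = x.shape
    k = depth_kernels.shape[-1]
    Ho, Wo = H - k + 1, W - k + 1
    depth_out = np.empty((C, Ho, Wo))
    for c in range(C):                       # spatial conv per spectral channel
        for i in range(Ho):
            for j in range(Wo):
                depth_out[c, i, j] = np.sum(x[c, i:i+k, j:j+k] * depth_kernels[c])
    # pointwise 1x1 convolution merges features across channels
    return np.tensordot(point_kernels, depth_out, axes=([1], [0]))
```

The split reduces the parameter count from C_out·C·k² (standard convolution) to C·k² + C_out·C, which is what makes the CNN branch "lightweight".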
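The pixel/super-pixel encoder–decoder of claim 3 can be sketched with a one-hot assignment matrix Q; `build_assignment`, `encode`, and `decode` are hypothetical helper names, and column-normalized averaging is an assumption consistent with the claim's column-normalized Q̂.

```python
import numpy as np

def build_assignment(labels, n_superpixels):
    """Q[i, j] = 1 iff flattened pixel i belongs to superpixel j."""
    N = labels.size
    Q = np.zeros((N, n_superpixels))
    Q[np.arange(N), labels.ravel()] = 1.0
    return Q

def encode(X_flat, Q):
    """Pixel -> superpixel node: column-normalised Q averages pixel features."""
    Q_hat = Q / Q.sum(axis=0, keepdims=True)
    return Q_hat.T @ X_flat          # (S, D) superpixel-node feature matrix V

def decode(V, Q):
    """Superpixel node -> pixel: each pixel copies its node's feature."""
    return Q @ V                     # (N, D) pixel-level features
```

The encoder feeds the GCN branch with node features; the decoder maps GCN outputs back onto the pixel grid for fusion with the CNN branch.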
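The SSIM measure used for the kNN super-edges in claim 4 can be sketched in its common collapsed form (luminance, contrast and structure terms combined, with the exponents α = β = γ = 1); the stabilizing constants c1 and c2 are conventional defaults, not values from the patent.

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM of two equally sized patches, standard collapsed form:
    ((2*mu_x*mu_y + c1)(2*cov + c2)) / ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical patches score 1; the claim then rescales the value linearly into [-1, 1] so it can be combined with the spatial weights.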
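One HGNNConv propagation step of claim 8 (vertices aggregate into hyperedges, weighted hyperedges update vertices, merged into the symmetric spectral form) can be sketched directly from the incidence matrix; the ReLU stands in for the unspecified activation σ.

```python
import numpy as np

def hgnn_conv(X, H, w, Theta):
    """One spectral HGNNConv layer:
    X' = sigma(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta).
    X: (N, D) node features; H: (N, E) incidence matrix;
    w: (E,) hyperedge weights; Theta: (D, D_out) learnable weights."""
    dv = H @ w                        # vertex degrees d(v) = sum_e w(e) h(v, e)
    de = H.sum(axis=0)                # hyperedge degrees d(e) = sum_v h(v, e)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)   # ReLU as the activation sigma
```

Stacking two such layers, as the claim specifies, yields the two-layer hypergraph convolutional network for high-order feature interaction.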
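The channel attention of claim 9, Step 3.1 (shared MLP over max- and avg-pooled channel descriptors, summed and passed through a Sigmoid) can be sketched as below; the two-layer MLP with a ReLU hidden layer follows the common CBAM formulation and is an assumption about the patent's "shared MLP".

```python
import numpy as np

def channel_attention(F, W1, W2):
    """CBAM-style channel attention: M_c = sigmoid(MLP(MaxPool(F)) + MLP(AvgPool(F))).
    F: (C, H, W) feature map; W1: (C//r, C), W2: (C, C//r) shared MLP weights."""
    avg = F.mean(axis=(1, 2))         # (C,) avg-pooled channel descriptor
    mx = F.max(axis=(1, 2))           # (C,) max-pooled channel descriptor
    def mlp(v):
        return W2 @ np.maximum(W1 @ v, 0.0)    # shared two-layer MLP, ReLU hidden
    return 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))   # weights in (0, 1)
```

Each branch's feature map is then rescaled channel-wise by these weights before the cross-matrix fusion of Step 3.2.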
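The Focal Loss of claim 9, Step 3.5 can be sketched in its standard multi-class form; the function signature and the defaults are illustrative, not from the patent.

```python
import numpy as np

def focal_loss(probs, labels, alpha, gamma=2.0):
    """FL = -(1/N) sum_i alpha_{y_i} (1 - p_{i,y_i})^gamma log p_{i,y_i}.
    probs: (N, C) predicted class probabilities; labels: (N,) true class indices;
    alpha: (C,) per-class weights; gamma: focusing parameter."""
    p_t = probs[np.arange(len(labels)), labels]   # probability of the true class
    return float(np.mean(-alpha[labels] * (1.0 - p_t) ** gamma * np.log(p_t)))
```

With gamma = 0 this reduces to weighted cross-entropy; gamma > 0 down-weights well-classified samples, which is the point of using it for imbalanced Mars surface classes.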

Description

Mars hyperspectral image classification method based on CNN and GCN

Technical Field

The invention relates to a Mars hyperspectral image classification method based on CNN and GCN, and belongs to the technical field of remote sensing image classification.

Background

Mars hyperspectral image classification (HSIC) is one of the core technologies of Mars geological exploration and resource detection: by analyzing the spatial-spectral joint characteristics of images, it realizes accurate identification of targets such as Mars surface minerals and rocks. In the prior art, traditional machine learning methods such as support vector machines and random forests rely on manual feature extraction and are difficult to adapt to the high-dimensional, complex spatial-spectral correlation characteristics of Mars HSI. Single-branch deep learning methods such as the three-dimensional convolutional neural network (3D-CNN) and the graph convolutional network (GCN) have their own limitations: a CNN branch can capture pixel-level spatial-spectral features, but it does not fully exploit the super-pixel-level topological structure and is easily disturbed by noise and mixed pixels; a GCN branch can model the topological relations among super-pixels, but a single branch cannot take pixel-level details into account, its super-edge generation strategy is single, and its heavy reliance on the k-nearest-neighbor algorithm overemphasizes spectral similarity while neglecting spatial topological correlation, causing feature-characterization deviation and finally affecting classification accuracy.
To remedy these defects, one prior paper proposed a Mars HSI classification model based on 3D-CNN, which improves feature-expression capability to a certain extent by fusing spatial-spectral features through multi-scale convolution; however, it does not introduce a hypergraph structure, mines the super-pixel-level topological associations insufficiently, and has limited robustness in complex geological scenes. Another prior paper, "Hyperspectral Image Classification Using Graph Convolutional Networks with Superpixel Segmentation", adopts GCN combined with super-pixel segmentation to realize HSI classification, constructing a graph structure through super-pixels to strengthen spatial association; but its single-branch architecture cannot simultaneously consider the complementarity of pixel-level and super-pixel-level features, and its hyperedges are constructed only from the spectral Euclidean distance, so its adaptability to the illumination changes and noise interference of Mars HSIC is poor. These improvement schemes still have obvious defects. On the one hand, lacking a double-branch network, they cannot fuse pixel-level and super-pixel-level features efficiently, making global-local information complementation difficult. On the other hand, the super-edge generation strategy is single: relying only on kNN cannot fully combine spatial topology with spectral similarity, so the physical meaning of hypergraph modeling is ill-defined and distortion easily occurs during feature propagation. In addition, the feature-fusion mode lacks adaptivity, as the weights cannot be adjusted dynamically according to the spatial-spectral characteristics of Mars HSI, which finally restricts further improvement of classification performance.
In summary, the invention provides a technical scheme capable of simultaneously mining pixel-level spatial-spectral features and super-pixel-level topological features, constructing the hypergraph with multiple strategies, and adaptively fusing features, so as to meet the urgent requirement of high-precision classification for Mars HSIC.

Disclosure of Invention

The invention aims to provide a Mars hyperspectral image classification method based on CNN and GCN, so as to solve the technical problems that in the prior art the pixel-level and super-pixel-level features cannot be fused efficiently and distortion easily occurs during feature propagation. To achieve the above purpose, the technical scheme of the invention is as follows: a Mars hyperspectral image classification method based on CNN and GCN combines CNN and GCN into a hybrid neural network, characterizes the spectral and spatial features of the HSI through the extraction and fusion of super-pixel and pixel features, and leads the features of the two branches into channel attention and spatial attention for adaptive fusion, comprising the following steps: Step 1, constructing a double-branch hybrid neural network comprising a CNN branch and a GCN branch, wherein the CNN branch is of a lightweight network structure and comprises a denoising dimension reduction module, a depth separable convolution module and a multi-scale conv