CN-122024964-A - Multimode fusion fatigue life prediction method based on 3D convolutional neural network

CN122024964A

Abstract

The invention relates to a multi-modal fusion fatigue life prediction method based on a 3D convolutional neural network, comprising the following steps: obtaining a fatigue data set of the equipment and setting a region of interest; constructing a multi-modal fusion network from a 3D ResNet-18 network and a deep neural network; and training the multi-modal fusion network with the fatigue data set to obtain a metal fatigue life prediction model. The 3D ResNet-18 network comprises one initial convolution layer, four downsampling stages and four groups of stacked residual blocks. The fatigue data set is input to the 3D ResNet-18 network, which outputs multi-scale defect features; these features, together with the fatigue data set, serve as the input of the deep neural network, and the actual fatigue life prediction result is obtained from the metal fatigue life prediction model. Compared with the prior art, the method overcomes the strong dependence on manual feature extraction and parameter selection, as well as the insufficient generalization, of traditional one-dimensional modal data modelling methods.

Inventors

  • WANG HAIJIE
  • Medina Jurat
  • LI BO
  • XUAN FUZHEN

Assignees

  • East China University of Science and Technology (华东理工大学)

Dates

Publication Date
2026-05-12
Application Date
2026-01-30

Claims (10)

  1. A multi-modal fusion fatigue life prediction method based on a 3D convolutional neural network, characterized by comprising the following steps: acquiring a fatigue data set of the equipment and setting a region of interest; constructing a multi-modal fusion network from a 3D ResNet-18 network and a deep neural network, and training the multi-modal fusion network with the fatigue data set to obtain a metal fatigue life prediction model, wherein the 3D ResNet-18 network comprises one initial convolution layer, four downsampling stages and four groups of stacked residual blocks, the fatigue data set is input to the 3D ResNet-18 network, the 3D ResNet-18 network outputs multi-scale defect features, and the multi-scale defect features and the fatigue data set together serve as the input of the deep neural network; and obtaining an actual fatigue life prediction result from the metal fatigue life prediction model.
  2. The method of claim 1, wherein the 3D ResNet-18 network comprises one initial convolution layer, four downsampling stages, and four groups of stacked residual blocks.
  3. The method of claim 1, wherein after the fatigue data set is input to the initial convolution and downsampling of the 3D ResNet-18 network, the region of interest in each image of the fatigue data set first undergoes the initial convolution, which reduces the image resolution to 112×112 pixels, increases the number of feature channels to 64 and keeps the depth at 270; the image is then downsampled four times to obtain shallow generalized features.
  4. The method of claim 3, wherein the shallow generalized features serve as the input of the residual blocks, and the convolution kernel depths of the four groups of residual blocks differ.
  5. The method of claim 4, wherein the 3D ResNet-18 network further comprises a network output part, the output of the residual blocks serves as the input of the network output part, and the multi-scale defect features are obtained through an average pooling layer and a fully connected layer of the network output part.
  6. The method of claim 5, wherein the multi-scale defect features are a one-dimensional feature vector of length 128.
  7. The method of claim 1, wherein the deep neural network comprises 1 input layer, 7 hidden layers and 1 output layer.
  8. The method of claim 7, wherein each hidden layer of the deep neural network comprises 64 neurons.
  9. The method of claim 8, wherein the deep neural network outputs a fatigue life prediction value y_pred, and the loss function of the deep neural network minimizes the deviation of the prediction value y_pred from the true value y_test.
  10. The method of claim 1, wherein the fatigue data set comprises process parameters, mechanical properties and load conditions.
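
Claim 3 fixes the resolution after the initial convolution at 112×112 pixels but does not state the strides of the four subsequent downsampling stages. A minimal pure-Python sketch of the implied spatial geometry, assuming each stage halves the in-plane resolution (stride 2, typical of ResNet-style networks; this is an assumption, not stated in the claims):

```python
def downsample_chain(height, width, stages, stride=2):
    """Return the (H, W) sizes after each of `stages` stride-`stride` steps."""
    sizes = [(height, width)]
    for _ in range(stages):
        height, width = height // stride, width // stride
        sizes.append((height, width))
    return sizes

# Claim 3: the initial convolution yields 112x112 in-plane resolution
# (64 feature channels, depth kept at 270); four downsamplings follow.
sizes = downsample_chain(112, 112, stages=4)
print(sizes)  # [(112, 112), (56, 56), (28, 28), (14, 14), (7, 7)]
```

Under this assumption the residual groups would operate on progressively coarser feature maps, ending at 7×7 in-plane before the average pooling and fully connected layer of claim 5.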

Description

Multi-modal fusion fatigue life prediction method based on a 3D convolutional neural network

Technical Field

The invention relates to the technical field of fatigue life prediction, and in particular to a multi-modal fusion fatigue life prediction method based on a 3D convolutional neural network.

Background

As high-end equipment continues to develop toward precision and complexity, the demands for high reliability and safety during service keep rising. The fatigue properties of structural materials under complex loads directly affect the reliability of equipment components. Additive manufacturing has become a leading-edge technology for the leapfrog development of aerospace equipment, owing to its remarkable advantages in design freedom for complex structures, high material utilization and low manufacturing cost. However, the additive manufacturing process inevitably creates inherent defects such as pores, lack of fusion and microcracks, which tend to act as local sources of stress concentration, promoting earlier initiation and accelerated propagation of fatigue cracks and thereby compromising the structural integrity and service reliability of additively manufactured components. Scientific evaluation and accurate prediction of the fatigue strength and life of additive components are therefore critical to guaranteeing the service safety and reliability of equipment components. Although life prediction methods based on semi-empirical models and fracture mechanics have succeeded in certain situations, their poor integration and low transferability make it challenging to build a generic mathematical model that uniformly describes the complex structure-performance relationships of materials.
With the growth of computing power and the development of artificial intelligence, data-driven fatigue life prediction methods have been widely applied. However, existing data-driven methods rely on single-modal data: a machine learning fatigue life prediction model is built by extracting key parameters describing the size, position, shape and distribution of defects and combining them with one-dimensional modal data such as load conditions and mechanical properties. Because these methods rely on an explicit characterization of a limited set of defects, they often fail to adequately capture the combined effects of defects in terms of spatial distribution, morphological complexity and multi-scale interactions.

Disclosure of Invention

The invention aims to realize collaborative modelling of cross-modal features and end-to-end prediction of fatigue life, solving the problems of strong dependence on manual feature extraction and parameter selection, and insufficient generalization, of traditional one-dimensional modal data modelling methods.
The aim of the invention is achieved by the following technical scheme: a multi-modal fusion fatigue life prediction method based on a 3D convolutional neural network comprises the following steps: acquiring a fatigue data set of the equipment and setting a region of interest; constructing a multi-modal fusion network from a 3D ResNet-18 network and a deep neural network, and training the multi-modal fusion network with the fatigue data set to obtain a metal fatigue life prediction model, wherein the 3D ResNet-18 network comprises one initial convolution layer, four downsampling stages and four groups of stacked residual blocks, the fatigue data set is input to the 3D ResNet-18 network, the 3D ResNet-18 network outputs multi-scale defect features, and the multi-scale defect features and the fatigue data set together serve as the input of the deep neural network; and obtaining an actual fatigue life prediction result from the metal fatigue life prediction model.

Further, the 3D ResNet-18 network includes one initial convolution layer, four downsampling stages, and four groups of stacked residual blocks. Further, after the fatigue data set is input to the initial convolution and downsampling of the 3D ResNet-18 network, the region of interest in each image first undergoes the initial convolution, which reduces the image resolution to 112×112 pixels, increases the number of feature channels to 64 and keeps the depth at 270; the image is then downsampled four times to obtain shallow generalized features. Further, the shallow generalized features serve as the input of the residual blocks, and the convolution kernel depths of the four groups of residual blocks differ. Further, the 3D ResNet-18 network also comprises a network output part; the output of the residual blocks serves as the input of the network output part, and the multi-scale defect features are obtained through an average pooling layer and a fully connected layer of the network output part.
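
The fusion step described above can be sketched numerically: the 3D ResNet-18 branch yields a 128-dimensional defect feature vector (claim 6), which is concatenated with the one-dimensional modal data (process parameters, mechanical properties, load conditions; claim 10) and fed to a deep neural network with 1 input layer, 7 hidden layers of 64 neurons and 1 output layer (claims 7-8). The NumPy sketch below assumes ReLU activations, a 6-column tabular input, and a mean-squared-error loss; none of these specifics are stated in the patent, which says only that the loss minimizes the deviation of y_pred from y_test (claim 9).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_dnn(in_dim, hidden=64, n_hidden=7):
    """Weights for 1 input layer, 7 hidden layers of 64 neurons, 1 output
    (claims 7-8). The He-style initialisation is an illustrative choice."""
    dims = [in_dim] + [hidden] * n_hidden + [1]
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    for w, b in params[:-1]:
        x = np.maximum(x @ w + b, 0.0)   # ReLU hidden layers (assumed)
    w, b = params[-1]
    return (x @ w + b).ravel()           # fatigue life prediction y_pred

# Multi-scale defect features: 128-dim vectors from the 3D ResNet-18
# branch (claim 6); random stand-ins for 4 samples here.
defect_feat = rng.standard_normal((4, 128))
# One-dimensional modal data: process parameters, mechanical properties,
# load conditions (claim 10); 6 columns is an illustrative choice.
tabular = rng.standard_normal((4, 6))

fused = np.concatenate([defect_feat, tabular], axis=1)  # cross-modal fusion
params = init_dnn(fused.shape[1])
y_pred = forward(params, fused)

# Claim 9: the loss minimises the deviation of y_pred from y_test;
# mean squared error is one common realisation of that (assumed here).
y_test = rng.standard_normal(4)
mse = float(np.mean((y_pred - y_test) ** 2))
print(fused.shape, y_pred.shape)
```

In a real training loop the defect features would come from the 3D ResNet-18's average pooling and fully connected layers rather than a random generator, and the two branches would be optimized jointly end-to-end, as the disclosure describes.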