
CN-122020559-A - Industrial multi-mode data edge cloud collaborative fusion processing method and related equipment

CN 122020559 A

Abstract

The application provides an industrial multi-modal data edge-cloud collaborative fusion processing method and related equipment, relating to the technical fields of the industrial Internet and edge computing. The method comprises: acquiring, at an edge node, multi-modal data of industrial equipment for the current stage; performing feature extraction on the multi-modal data to obtain multi-modal features; and performing device state prediction on the multi-modal features through a lightweight fusion model to obtain a device state prediction result for the industrial equipment, wherein the model parameters of the lightweight fusion model are obtained by the cloud through model optimization training based on the device state prediction results and multi-modal features of the industrial equipment from the previous stage. The device state prediction result and multi-modal features for the current stage are sent to the cloud, the model optimization parameters issued by the cloud are loaded, and the model parameters of the lightweight fusion model are updated accordingly. This realizes a collaborative closed loop of low-delay edge intelligent decision-making and high-precision continuous cloud optimization for industrial multi-modal data.

Inventors

  • Wang Sunjun
  • Huang Zhangkai
  • Shen Wenchen
  • Ma Gang
  • Su Qingchen
  • Li Rong
  • Chen Liang
  • Yang Zhi

Assignees

  • 杭州义益钛迪信息技术有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-02-06

Claims (10)

  1. An industrial multi-mode data edge-cloud collaborative fusion processing method, applied to an edge node corresponding to industrial equipment, the method comprising: acquiring multi-modal data of the industrial equipment for the current stage; performing feature extraction on the multi-modal data to obtain multi-modal features; performing device state prediction on the multi-modal features through a lightweight fusion model to obtain a device state prediction result of the industrial equipment corresponding to the multi-modal data, wherein the model parameters of the lightweight fusion model are obtained by the cloud through model optimization training based on the device state prediction result and multi-modal features of the industrial equipment from the previous stage; and sending the device state prediction result and multi-modal features for the current stage to the cloud, loading the model optimization parameters issued by the cloud, and updating the model parameters of the lightweight fusion model based on the model optimization parameters.
  2. The method of claim 1, wherein the multi-modal data includes structured data and unstructured data, and the feature extraction performed on the multi-modal data to obtain multi-modal features comprises: performing data cleaning on the structured data to obtain cleaned structured data, wherein the data cleaning comprises outlier elimination, linear interpolation completion, and standardization; extracting time-frequency-domain features of the cleaned structured data to obtain structured features; performing feature extraction on the unstructured data through a feature extraction strategy matched to the data type of the unstructured data to obtain unstructured features, wherein the data types include video streams, audio signals, and infrared images, and the corresponding feature extraction strategies comprise a frame-screening and resolution-compression strategy, an audio feature parameter extraction strategy, and an image enhancement strategy; and obtaining the multi-modal features corresponding to the multi-modal data based on the structured features and the unstructured features.
  3. The method according to claim 1 or 2, wherein the performing device state prediction on the multi-modal features through the lightweight fusion model to obtain the device state prediction result of the industrial equipment corresponding to the multi-modal data comprises: performing parallel feature conversion on the multi-modal features through the lightweight fusion model to obtain multi-modal fault semantic features; weighting and fusing the multi-modal fault semantic features based on fusion weights to obtain fused features; and performing fault prediction based on the fused features to obtain the device state prediction result of the industrial equipment corresponding to the multi-modal data, wherein the device state prediction result comprises a device fault probability value and a confidence of the device fault probability value.
  4. An industrial multi-mode data edge-cloud collaborative fusion processing method, applied to a cloud corresponding to industrial equipment, the method comprising: receiving the multi-modal features and device state prediction result of the industrial equipment for the current stage, transmitted by an edge node of the industrial equipment, wherein the device state prediction result is obtained by the edge node performing device state prediction on the multi-modal features through a lightweight fusion model; constructing a joint training set based on the multi-modal features and the device state prediction result; performing model optimization training on a preset multi-modal network model based on the joint training set to obtain a loss function value and a model output result; performing dynamic back-propagation gradient adjustment on the model parameters of the preset multi-modal network model based on the loss function value and the model output result to obtain model optimization parameters; and when the model optimization parameters meet a convergence condition, issuing the model optimization parameters to the edge node, wherein the model optimization parameters are used to update the model parameters of the lightweight fusion model, and the updated lightweight fusion model is used to predict the device state of the industrial equipment in the next stage.
  5. The method according to claim 4, wherein the model parameters of the preset multi-modal network model are training weights and the model optimization parameters are optimized weights; and the performing model optimization training on the preset multi-modal network model based on the joint training set to obtain a loss function value and a model output result comprises: inputting the multi-modal features in the joint training set into the preset multi-modal network model to obtain multi-path features; obtaining a loss function based on the prediction results in the joint training set; and, in each iteration of the model optimization training, fusing the multi-path features based on the training weights to obtain training fusion features, and taking the training fusion features as the model output result, wherein the training weights are the fusion weights of the lightweight fusion model or the optimized weights corresponding to the previous iteration.
  6. The method of claim 5, wherein the multi-modal features comprise structured features and unstructured features, the unstructured features comprising visual features and audio features; and the inputting the multi-modal features in the joint training set into the preset multi-modal network model to obtain multi-path features comprises: inputting the structured features into a structured branch of the preset multi-modal network model to obtain a first path of features; inputting the visual features into a visual branch of the preset multi-modal network model to obtain a second path of features; inputting the audio features into an audio branch of the preset multi-modal network model to obtain a third path of features; and concatenating the first, second, and third paths of features to obtain the multi-path features.
  7. The method according to any one of claims 4 to 6, wherein the issuing the model optimization parameters to the edge node when the model optimization parameters meet a convergence condition comprises: when the iteration count of the model optimization training equals a frequency threshold, acquiring a validation set corresponding to the joint training set; optimizing the preset multi-modal network model based on the model optimization parameters corresponding to the current iteration to obtain an optimized network model; validating the validation set through the optimized network model to obtain a validation-set accuracy; acquiring the improvement between this accuracy and the accuracy corresponding to the previous iteration; and when the improvement is greater than or equal to an amplitude threshold, issuing the model optimization parameters to the edge node to update the lightweight fusion model.
  8. An edge node, comprising: an edge perception module, configured to acquire multi-modal data of industrial equipment for the current stage; an edge feature processing module, configured to perform feature extraction on the multi-modal data to obtain multi-modal features; an edge reasoning module, configured to perform prediction on the multi-modal features through a lightweight fusion model to obtain a device state prediction result of the industrial equipment corresponding to the multi-modal data, wherein the model parameters of the lightweight fusion model are obtained through model optimization training based on the device state prediction result and multi-modal features of the industrial equipment from the previous stage; an edge communication module, configured to send the device state prediction result and multi-modal features for the current stage to the cloud and load the model optimization parameters issued by the cloud; and an edge updating module, configured to update the model parameters of the lightweight fusion model based on the model optimization parameters.
  9. A cloud, comprising: a cloud communication module, configured to receive the multi-modal features and device state prediction result of the industrial equipment for the current stage, transmitted by an edge node of the industrial equipment, wherein the device state prediction result is obtained by the edge node performing device state prediction on the multi-modal features through a lightweight fusion model; a cloud data module, configured to construct a joint training set based on the multi-modal features and the device state prediction result; a cloud training module, configured to perform model optimization training on a preset multi-modal network model based on the joint training set to obtain a loss function value and a model output result; a cloud optimization module, configured to perform dynamic back-propagation gradient adjustment on the model parameters of the preset multi-modal network model based on the loss function value and the model output result to obtain model optimization parameters; wherein the cloud communication module is further configured to send the model optimization parameters to the edge node when the model optimization parameters meet a convergence condition, the model optimization parameters are used to update the model parameters of the lightweight fusion model, and the updated lightweight fusion model is used to predict the device state of the industrial equipment in the next stage.
  10. An industrial multi-mode data edge-cloud collaborative fusion processing system, comprising the edge node of claim 8 and the cloud of claim 9.
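Claims 2 and 3 describe an edge-side pipeline: clean the structured data (outlier elimination, linear interpolation completion, standardization), then weight and fuse per-modality features into a device fault probability with a confidence value. The following is a minimal Python sketch of those two steps, not taken from the patent: the z-score outlier rule, the sigmoid fault head, the confidence definition, and all function names are illustrative assumptions.

```python
import numpy as np

def clean_structured(x: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Data cleaning per claim 2: eliminate outliers, fill them by
    linear interpolation, then standardize (zero mean, unit variance)."""
    x = x.astype(float).copy()
    mu, sigma = x.mean(), x.std() + 1e-8
    outliers = np.abs(x - mu) > z_thresh * sigma          # outlier elimination (z-score rule, assumed)
    idx = np.arange(len(x))
    x[outliers] = np.interp(idx[outliers], idx[~outliers], x[~outliers])  # linear interpolation completion
    return (x - x.mean()) / (x.std() + 1e-8)              # standardization

def weighted_fusion_predict(features: dict, weights: dict) -> tuple:
    """Claim 3 sketch: weight and fuse per-modality fault semantic
    features, then derive a fault probability and its confidence."""
    fused = sum(weights[m] * features[m] for m in features)  # weighted fusion
    logit = fused.mean()                                     # toy fault-prediction head (assumed)
    p_fault = 1.0 / (1.0 + np.exp(-logit))                   # device fault probability value
    confidence = abs(p_fault - 0.5) * 2                      # confidence: distance from decision boundary (assumed)
    return p_fault, confidence
```

A usage note: in practice the fusion weights would come from the cloud-issued model optimization parameters of claim 1, and the fault head would be a trained classifier rather than a mean-plus-sigmoid.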
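Claims 4 and 7 describe the cloud-side control flow: iterate model optimization training until the iteration count reaches a frequency threshold, validate, and issue the optimized parameters to the edge node only if the accuracy improvement meets an amplitude threshold. A hedged Python sketch of that loop, with the training step and validation abstracted behind hypothetical callables (`train_fn`, `validate_fn`); nothing here is the patent's actual implementation.

```python
def cloud_optimize(train_fn, validate_fn, freq_threshold: int,
                   amp_threshold: float, prev_acc: float = 0.0):
    """Claims 4 and 7 sketch: run optimization iterations; when the
    iteration count equals the frequency threshold, validate, and
    return (issue) the parameters only if the accuracy improvement
    is at least the amplitude threshold."""
    params = None
    for iteration in range(1, freq_threshold + 1):
        params, loss = train_fn(params)          # one back-propagation adjustment step (abstracted)
        if iteration == freq_threshold:          # iteration count hits the frequency threshold
            acc = validate_fn(params)            # accuracy on the validation set
            lift = acc - prev_acc                # improvement over the previous check
            if lift >= amp_threshold:
                return params                    # convergence condition met: issue to the edge node
    return None                                  # improvement insufficient: keep training, do not issue
```

In a real system `train_fn` would perform a gradient step on the preset multi-modal network model over the joint training set, and the returned parameters would be serialized and sent to the edge node for the model update of claim 1.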

Description

Industrial multi-mode data edge cloud collaborative fusion processing method and related equipment

Technical Field

The application relates to the technical fields of the industrial Internet and edge computing, and in particular to an industrial multi-mode data edge-cloud collaborative fusion processing method and related equipment.

Background

As intelligent manufacturing continues to evolve toward deep perception and intelligent decision-making, industrial sites generate ever-larger volumes of multi-modal data, covering both structured and unstructured data. Together, this multi-source heterogeneous data forms the key information base for equipment state monitoring, fault diagnosis, production quality control, and process flow optimization. The mainstream multi-modal data processing schemes in the current industry are either a centralized cloud processing mode or a simple edge-cloud separation architecture. In the centralized cloud processing mode, the edge end only performs raw data acquisition and uploading and lacks effective preprocessing and feature extraction capability, so a large amount of redundant raw data is uploaded to the cloud, increasing the network bandwidth burden and significantly increasing system response delay. In the edge-cloud separation architecture, intended to address this problem, the edge end performs only basic data format conversion or filtering, while multi-modal fusion still relies on the cloud simply concatenating the data of each modality. As a result, dynamic semantic associations across modalities cannot be fully modeled, the gain in fusion precision is limited, edge computing resources are used inefficiently, and the cloud model is difficult to dynamically adapt and optimize according to actual edge working conditions.
As can be seen, the mainstream multi-modal data processing schemes in the existing industry still have significant shortcomings in processing timeliness, fusion precision, resource utilization efficiency, model self-adaptation capability, and system robustness, and can hardly meet the comprehensive requirements of intelligent manufacturing for real-time performance, accuracy, and reliability. Therefore, how to realize an edge-cloud collaborative fusion processing scheme for industrial multi-modal data with low delay, high precision, and strong adaptability is a problem to be solved.

Disclosure of Invention

The embodiments of the application provide an industrial multi-mode data edge-cloud collaborative fusion processing method and related equipment, which realize a collaborative closed loop of low-delay edge intelligent decision-making and high-precision continuous cloud optimization for industrial multi-modal data, and improve the accuracy and overall robustness of equipment state monitoring through dynamic model updating and self-adaptive fusion while ensuring real-time performance.
In a first aspect, an embodiment of the present application provides an industrial multi-mode data edge-cloud collaborative fusion processing method, applied to an edge node corresponding to industrial equipment, the method comprising: acquiring multi-modal data of the industrial equipment for the current stage; performing feature extraction on the multi-modal data to obtain multi-modal features; performing device state prediction on the multi-modal features through a lightweight fusion model to obtain a device state prediction result of the industrial equipment corresponding to the multi-modal data, wherein the model parameters of the lightweight fusion model are obtained by the cloud through model optimization training based on the device state prediction result and multi-modal features of the industrial equipment from the previous stage; and sending the device state prediction result and multi-modal features for the current stage to the cloud, loading the model optimization parameters issued by the cloud, and updating the model parameters of the lightweight fusion model based on the model optimization parameters.
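The last step of the first aspect, loading the cloud-issued model optimization parameters into the lightweight fusion model, can be sketched as a simple parameter overwrite. The name-to-weight dictionary format and the function name below are illustrative assumptions, not the patent's actual parameter-exchange mechanism.

```python
def update_fusion_weights(current: dict, issued: dict) -> dict:
    """Replace any fusion weight named in the cloud-issued model
    optimization parameters; weights the cloud did not optimize
    are kept unchanged."""
    updated = dict(current)
    for name, value in issued.items():
        if name in updated:
            updated[name] = value        # overwrite with the optimized weight
    return updated
```

After this update, the edge node uses the refreshed lightweight fusion model for device state prediction in the next stage, closing the edge-cloud loop.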
In one possible implementation, the multi-modal data includes structured data and unstructured data, and the feature extraction performed on the multi-modal data to obtain multi-modal features includes: performing data cleaning on the structured data to obtain cleaned structured data, wherein the data cleaning comprises outlier elimination, linear interpolation completion, and standardization; extracting time-frequency-domain features of the cleaned structured data to obtain structured features; performing feature extraction on the unstructured data through a feature extraction strategy matched to the data type of the unstructured data to obtain unstructured features, wherein the data types include video streams, audio signals, and infrared images, and the corresponding feature extraction strategies comprise a frame scr