CN-121972765-A - Multi-modal data fusion welding robot real-time weld seam sensing method, system, electronic device and storage medium

CN 121972765 A

Abstract

The application provides a multi-modal data fusion method, system, electronic device and storage medium for real-time weld seam sensing by a welding robot, belonging to welding automation technology. The method synchronously collects multiple types of sensing data during welding, adds uniform time stamps to the collected data, and filters it to obtain initial multi-modal data. The initial multi-modal data is unified into a common coordinate system referenced to the welding gun and aligned on the time axis using the time stamps, after which quality parameters and state features are extracted for each modality. A feature fusion network dynamically generates a fusion weight for each modality from its quality parameters, dynamically weights the state features of each modality by these weights, and generates and outputs a set of optimized features for the current moment. The optimized features are fed into a pre-built decision model, which calculates the weld tracking parameters required for real-time control of the welding robot. This scheme resists interference from complex working conditions more effectively and senses weld seam information accurately in real time.
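The abstract states that each modality's "quality parameters" characterize the credibility of its state features and drive the fusion weights. Claim 6 below spells out one concrete recipe: normalize each raw quality parameter to [0, 1], then combine by weighted average, minimum, or product. A minimal sketch of that recipe follows; the parameter names and numeric ranges are illustrative assumptions, not values fixed by the patent:

```python
def normalize(value, lo, hi):
    """Clamp-map a raw quality parameter into [0, 1] given an assumed valid range."""
    if hi <= lo:
        raise ValueError("invalid range")
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))


def modality_confidence(quality, ranges, combine="min"):
    """Combine one modality's normalized quality scores into a single confidence.

    quality: dict of parameter name -> raw value
    ranges:  dict of parameter name -> (lo, hi) assumed valid range
    combine: 'min' (conservative), 'product', or anything else for the mean --
             claim 6 allows weighted average, minimum value, or product.
    """
    scores = [normalize(quality[k], *ranges[k]) for k in quality]
    if combine == "min":
        return min(scores)
    if combine == "product":
        out = 1.0
        for s in scores:
            out *= s
        return out
    return sum(scores) / len(scores)
```

For example, a point cloud frame with high groove-edge definition but a mediocre effective-point proportion would receive the lower of the two scores under the conservative `min` rule.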

Inventors

  • CHEN KELE
  • LI QINGHUA
  • AI KEHUA
  • ZHANG RENJUN

Assignees

  • 四川英创力电子科技股份有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-04-07

Claims (10)

  1. A real-time weld seam sensing method for a welding robot based on multi-modal data fusion, characterized by comprising the following steps:
     Step 1, multi-modal data acquisition and preprocessing: synchronously acquiring multiple types of sensing data during welding, including: three-dimensional point cloud data of the weld seam region ahead of the welding gun; welding process data comprising mechanical process data and electrical process data, wherein the mechanical process data is measured by a torque sensor mounted at the end of the welding gun, and the electrical process data comprises a welding current signal acquired at the output of the welding power supply and an arc voltage signal acquired between the welding gun contact tip and the workpiece; and temperature field distribution data of the weld pool and a preset surrounding region; adding a uniform time stamp to the acquired sensing data and applying filtering preprocessing to each stream to obtain initial multi-modal data;
     Step 2, spatio-temporal registration and feature extraction: unifying the initial multi-modal data into a common coordinate system referenced to the welding gun and aligning the time axes based on the time stamps, so that the initial multi-modal data collected at the same moment correspond one to one and spatio-temporal registration is completed; then extracting the quality parameters and state features of each modality;
     Step 3, feature fusion based on dynamic weights: inputting the quality parameters and state features of each modality at the same moment into a pre-constructed feature fusion network; the feature fusion network dynamically generates a fusion weight for each modality from its quality parameters, dynamically weights the state features of each modality using the fusion weights, and generates and outputs a set of optimized features for the current moment, wherein the quality parameters characterize the credibility of the state features of the modality to which they belong at the current moment;
     Step 4, weld parameter calculation and output: inputting the optimized features into a pre-constructed decision model and calculating the weld tracking parameters required for real-time control of the welding robot, the weld tracking parameters comprising the three-dimensional spatial trajectory coordinates of the weld seam, predicted values of the weld groove width and depth, and predicted values of the position and posture deviation of the welding gun relative to the weld centerline.
  2. The multi-modal data fusion real-time weld seam sensing method according to claim 1, wherein extracting the quality parameters and state features of each modality comprises: performing three-dimensional point cloud geometric analysis and weld region feature analysis on the spatio-temporally registered point cloud data, and extracting point cloud quality parameters characterizing the quality of the point cloud data itself and initial visual state features characterizing the weld geometry; performing force-signal time-domain analysis and contact-state feature analysis on the spatio-temporally registered mechanical process data, and extracting mechanical quality parameters characterizing the quality of the mechanical signals themselves and initial mechanical state features characterizing the contact state of the welding gun; performing electrical-signal time-domain statistics and arc stability analysis on the spatio-temporally registered electrical process data, and extracting electrical quality parameters characterizing the quality of the electrical signals themselves and initial electrical state features characterizing the arc state; and performing temperature field distribution analysis and weld pool feature extraction on the spatio-temporally registered temperature field distribution data, and extracting thermal quality parameters characterizing the quality of the temperature field data itself and initial thermal state features characterizing the weld pool state.
  3. The multi-modal data fusion real-time weld seam sensing method according to claim 2, wherein the feature fusion network comprises an input layer, feature encoders, a confidence generator, a cross-modal attention interaction layer, a Softmax normalization layer and a weighted fusion layer; the feature encoders are several in number, each corresponding to the state features of one modality; the input layer groups and separates the quality parameters and state features of all modalities input at the same moment, so that the initial visual, mechanical, electrical and thermal state features are each sent to their corresponding feature encoder for dimensional transformation or nonlinear mapping, yielding geometric, mechanical, electrical and thermal intermediate feature vectors of uniform dimension; the cross-modal attention interaction layer performs cross-modal association and adaptive correction on the initial geometric, mechanical, electrical and thermal feature confidences, and outputs corrected geometric, mechanical, electrical and thermal feature confidences; the Softmax normalization layer normalizes the corrected geometric, mechanical, electrical and thermal feature confidences and outputs a set of weighting coefficients summing to 1 as the fusion weights, wherein a higher confidence score yields a larger corresponding fusion weight; and the weighted fusion layer multiplies the geometric, mechanical, electrical and thermal intermediate feature vectors element-wise by their corresponding fusion weights and sums them, generating a set of optimized features for the current moment.
  4. The multi-modal data fusion real-time weld seam sensing method according to claim 3, wherein the cross-modal attention interaction layer establishes mutual-influence coefficients between modalities according to the correlation strengths among the initial geometric, mechanical, electrical and thermal feature confidences, performs a consistency check on each modality's confidence based on the mutual-influence coefficients, suppresses confidences that contradict the global working-condition trend, enhances confidences consistent with that trend, and outputs the corrected confidence of each modality.
  5. The multi-modal data fusion real-time weld seam sensing method according to claim 3, wherein the point cloud quality parameters comprise at least one of groove edge definition, noise point proportion and effective point proportion; the mechanical quality parameters comprise at least one of force signal-to-noise ratio, torque drift, impact outliers and contact stability, and the initial mechanical state features comprise welding gun normal torque, lateral offset torque, force fluctuation amplitude and weld contact deviation; the electrical quality parameters comprise at least one of current-voltage fluctuation coefficient, short-circuit frequency, arcing duty ratio and degree of deviation from the set value, and the initial electrical state features comprise real-time welding current, arc voltage, droplet transfer characteristics and arc position deviation; and the thermal quality parameters comprise weld pool isotherm regularity and/or temperature gradient change rate, and the initial thermal state features comprise temperature gradient, heat-affected-zone width, weld pool center position and weld pool size.
  6. The multi-modal data fusion real-time weld seam sensing method according to claim 3, wherein the confidence generator comprises: a normalization sub-layer, which normalizes the input quality parameters of the corresponding modality to the interval [0, 1] to obtain corresponding normalized quality scores; and a confidence mapping sub-layer, which calculates the initial confidence of the corresponding modality from the normalized quality scores, wherein, for any modality, when its quality parameters comprise only one item, the normalized quality score of that item is taken as the modality's initial confidence, and when its quality parameters comprise several items, the modality's initial confidence is calculated from the normalized quality scores by one of weighted average, minimum value or product.
  7. The multi-modal data fusion real-time weld seam sensing method according to claim 6, wherein the decision model comprises a shared feature layer, a task branch layer and a parameter output layer connected in sequence, the task branch layer comprising a first regression branch, a second regression branch and a third, classification branch arranged in parallel; the shared feature layer applies a nonlinear transformation to the input optimized features and extracts shared decision features; the first regression branch receives the shared decision features and calculates continuous three-dimensional spatial trajectory coordinates of the weld seam through at least one fully connected layer; the second regression branch receives the shared decision features and calculates continuous predicted weld geometric parameters through at least one fully connected layer, the predicted weld geometric parameters comprising predicted lateral deviation, predicted height deviation, and predicted groove width and depth; the third, classification branch receives the shared decision features, passes them through at least one fully connected layer followed by a Softmax activation, and calculates a discrete welding gun inclination state, the inclination states comprising left-inclined, centered or right-inclined; and the parameter output layer assembles the weld geometric parameters and the welding gun posture state into weld tracking parameters in a preset format.
  8. A real-time weld seam sensing system for a welding robot based on multi-modal data fusion, characterized by comprising: a multi-modal data acquisition and preprocessing module, for synchronously acquiring multiple types of sensing data during welding, including three-dimensional point cloud data of the weld seam region ahead of the welding gun; welding process data, comprising mechanical process data and electrical process data, wherein the mechanical process data is measured by a torque sensor mounted at the end of the welding gun, and the electrical process data comprises a welding current signal acquired at the output of the welding power supply and an arc voltage signal acquired between the welding gun contact tip and the workpiece; and temperature field distribution data of the weld pool and a preset surrounding region; a spatio-temporal registration and feature extraction module, for unifying the initial multi-modal data into a common coordinate system referenced to the welding gun, aligning the time axes based on the time stamps so that the initial multi-modal data collected at the same moment correspond one to one and spatio-temporal registration is completed, and then extracting the quality parameters and state features of each modality; a dynamic weighting and feature fusion module, for inputting the quality parameters and state features of each modality at the same moment into a pre-constructed feature fusion network, so that the feature fusion network dynamically generates a fusion weight for each modality from its quality parameters, dynamically weights the state features of each modality using the fusion weights, and generates and outputs a set of optimized features for the current moment, wherein the quality parameters characterize the credibility of the state features of the modality to which they belong at the current moment; and a weld parameter calculation and output module, for inputting the optimized features into a pre-constructed decision model and calculating the weld tracking parameters required for real-time control of the welding robot, the weld tracking parameters comprising the three-dimensional spatial trajectory coordinates of the weld seam, predicted values of the weld groove width and depth, and predicted values of the position and posture deviation of the welding gun relative to the weld centerline.
  9. An electronic device comprising at least one processor and a memory, the memory storing computer-executable instructions, wherein execution of the computer-executable instructions stored in the memory by the at least one processor causes the at least one processor to perform the multi-modal data fusion real-time weld seam sensing method of any one of claims 1-7.
  10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when run by a processor, controls the device in which the storage medium resides to perform the multi-modal data fusion real-time weld seam sensing method of any one of claims 1-7.
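Claims 3 and 6 describe the fusion core: per-modality confidence scores are normalized by a Softmax layer into weights summing to 1, and the uniform-dimension intermediate feature vectors are then combined by an element-wise weighted sum. The following is a minimal sketch of that Softmax-plus-weighted-sum step only; it omits the feature encoders and the cross-modal attention correction, and the modality names, dimensions and confidence values are illustrative, not taken from the patent:

```python
import math


def softmax(scores):
    """Normalize confidence scores into weights that sum to 1 (claim 3's Softmax layer)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]


def weighted_fusion(features, confidences):
    """Element-wise weighted sum of per-modality feature vectors of uniform dimension.

    features:    dict modality name -> feature vector (all the same length)
    confidences: dict modality name -> scalar confidence score
    Returns the fused feature vector and the per-modality fusion weights.
    """
    names = sorted(features)
    weights = softmax([confidences[n] for n in names])
    dim = len(features[names[0]])
    fused = [0.0] * dim
    for w, n in zip(weights, names):
        vec = features[n]
        for i in range(dim):
            fused[i] += w * vec[i]
    return fused, dict(zip(names, weights))
```

The dynamic behavior the claims aim for falls out directly: when, say, the visual confidence collapses under arc glare, its Softmax weight shrinks and the fused feature leans on the mechanical, electrical and thermal channels instead.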

Description

Multi-modal data fusion welding robot real-time weld seam sensing method, system, electronic device and storage medium

Technical Field

The application belongs to the field of welding automation, relates to welding robots and sensing-information fusion processing technology, and in particular relates to a multi-modal data fusion method, system, electronic device and storage medium for real-time weld seam sensing by a welding robot.

Background

With the rapid development of intelligent manufacturing and industrial automation, intelligent welding robots are increasingly widely applied in fields such as pressure vessels, shipbuilding, aerospace and steel structures. Accurate, real-time sensing and identification of the weld seam state is a precondition for autonomous, adaptive control of the welding process. Existing weld seam sensing schemes have the following shortcomings: 1. Single-vision sensing schemes generally use a CCD camera, laser structured light or a depth camera to acquire two-dimensional or three-dimensional image information of the weld seam and extract the weld position and groove features through image-processing algorithms. Under actual welding conditions, however, visual sensing is highly susceptible to strong arc radiation, welding fume dispersion, metal spatter and oil or reflections on the workpiece surface, which degrade image quality and cause feature extraction to fail; 2. Single physical-signal sensing schemes, such as force-controlled tracking or arc sensing, are insensitive to optical interference but struggle to obtain accurate three-dimensional weld geometry (groove width, depth, and so on), have low spatial resolution, and respond poorly to working-condition changes such as workpiece assembly errors and uneven groove machining; 3. Simple sensor-complementation schemes. Some prior studies have attempted to enhance perception by combining visual guidance with force-controlled contact. However, these methods mostly adopt simple data-switching logic or fusion strategies based on fixed weights, and fail to account for the differing confidence of the sensors under dynamically changing working conditions. When visual sensor data is invalidated by strong light or fume interference, or the force sensor becomes noisy due to abrupt contact-state changes, a fixed fusion scheme cannot dynamically evaluate and adaptively adjust the credibility of the multi-source information, so fusion accuracy drops and system robustness is insufficient. A new method is therefore needed that achieves robust, accurate, real-time weld seam sensing under strong interference and heavy noise by dynamically evaluating the credibility of, and intelligently fusing, multi-source heterogeneous information such as vision and force sensing.
Disclosure of the Invention

Addressing the shortcomings of the related prior art, the application provides a multi-modal data fusion method, system, electronic device and storage medium for real-time weld seam sensing by a welding robot, which can effectively resist complex working-condition interference and sense weld seam information accurately in real time. To achieve this, the invention adopts the following technique: a real-time weld seam sensing method for a welding robot based on multi-modal data fusion, comprising the following steps: Step 1, multi-modal data acquisition and preprocessing: synchronously acquiring multiple types of sensing data during welding, including: three-dimensional point cloud data of the weld seam region ahead of the welding gun; welding process data comprising mechanical process data and electrical process data, wherein the mechanical process data is measured by a torque sensor mounted at the end of the welding gun, and the electrical process data comprises a welding current signal acquired at the output of the welding power supply and an arc voltage signal acquired between the welding gun contact tip and the workpiece; and temperature field distribution data of the weld pool and a preset surrounding region; adding a uniform time stamp to the acquired sensing data and applying filtering preprocessing to each stream to obtain initial multi-modal data; Step 2, spatio-temporal registration and feature extraction: unifying the initial multi-modal data into a common coordinate system referenced to the welding gun, aligning the time axes based on a time
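The spatio-temporal registration step above must reconcile sensor streams sampled at different rates (point cloud frames, torque readings, current/voltage samples, thermal images) against the common time stamps. The patent does not specify an alignment algorithm, so the following is only a hedged sketch of one common choice, nearest-neighbor matching within a tolerance; the stream names and values are illustrative:

```python
import bisect


def align_at(streams, t, tolerance):
    """For each sensor stream, pick the sample whose timestamp is nearest to t.

    streams:   dict stream name -> list of (timestamp, value), sorted by timestamp
    tolerance: maximum allowed |timestamp - t| for a sample to count as "same moment"
    Returns a dict of stream name -> value containing only the streams that have
    a sample within the tolerance, i.e. the one-to-one set for moment t.
    """
    aligned = {}
    for name, samples in streams.items():
        stamps = [s[0] for s in samples]
        i = bisect.bisect_left(stamps, t)
        # The nearest sample is either just before or just after the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stamps)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(stamps[j] - t))
        if abs(stamps[best] - t) <= tolerance:
            aligned[name] = samples[best][1]
    return aligned
```

In practice a frame is emitted only when every required modality yields a sample within the tolerance; otherwise that modality's quality parameter (and hence its fusion weight) would be penalized downstream.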