
CN-121980949-A - Bridge structure intelligent sensing and evaluating method based on combination of machine vision and radar

CN121980949A

Abstract

A bridge structure intelligent perception and evaluation method based on the combination of machine vision and radar comprises: Step 1, arranging visual targets on key cross sections of a bridge, installing a deformation radar, a vision sensor and accelerometers, completing geometric calibration and time synchronization, and establishing a unified space-time reference. Step 2, continuously acquiring and processing radar echo signals to obtain a one-dimensional deformation time series; extracting the sub-pixel displacement sequence of each target center by digital image correlation or deep-learning target detection; converting it into node physical displacements and rotation angles by combining the camera intrinsic and extrinsic parameters with the target geometry; and generating a multi-source displacement observation set through timestamp interpolation and a coordinate transformation matrix. Step 3, constructing state and measurement equations, estimating the structural state variables and computing health risk indices. Step 4, building a short-term prediction model with an LSTM to obtain predicted future structural states, and issuing graded early warnings by combining the risk indices with the safety boundaries. The invention integrates the advantages of machine vision and deformation radar and realizes all-weather, high-precision, full-field monitoring.

Inventors

  • Xie Kaizhong
  • Guo Xiao
  • Liu Fei
  • Luo Renting
  • Lin Zhengyu
  • Sun Heyang

Assignees

  • Guangxi University (广西大学)

Dates

Publication Date
2026-05-05
Application Date
2026-01-30

Claims (9)

  1. An intelligent bridge structure sensing and evaluating method based on the combination of machine vision and radar, characterized by comprising the following steps: Step 1, arranging visual targets on key cross sections of a bridge, installing a deformation radar, a vision sensor and accelerometers, completing geometric calibration and time synchronization, and establishing a spatial mapping relation among the radar line-of-sight coordinate system, the camera imaging coordinate system and the bridge structure coordinate system through on-site calibration to form a unified space-time perception reference; Step 2, based on the space-time perception reference of Step 1, continuously acquiring echo signals of the key cross sections with the deformation radar and obtaining a one-dimensional deformation time series along the line-of-sight direction through frequency-modulation demodulation, phase unwrapping, atmospheric phase correction and filtering/denoising; acquiring bridge image sequences with the vision sensor, extracting the sub-pixel displacement sequence of each visual target center with a digital image correlation algorithm or a deep-learning-based target detection model to obtain two- or three-dimensional structural displacement, and converting it into node physical displacements and rotation angles by combining the calibrated camera intrinsic parameters, extrinsic parameters and the known target geometry; then aligning the one-dimensional deformation time series, the node physical displacements and the rotation angles by interpolation onto a uniform timestamp, projecting the radar line-of-sight deformation into the bridge structure coordinate system through a coordinate transformation matrix so that it shares the same spatial dimension as the visual displacement, and generating a temporally and spatially consistent multi-source displacement observation set; Step 3, using the multi-source displacement observation set of Step 2, constructing a state equation based on structural dynamics parameters and a multi-source heterogeneous measurement equation, estimating the structural state variables of the key cross sections with a Kalman-filter data fusion algorithm, outputting the state estimate and the state-estimate covariance matrix at the current time, accumulating them into a state time series, and computing health risk indices comprising synchronization-difference threshold judgment, frequency mutation detection, deflection overrun alarm, rotation-angle mutation detection and main vibration mode distortion detection; Step 4, taking the state time series generated in Step 3 as input, constructing a short-term response prediction model to obtain predicted structural states, including node deflection, rotation angle and frequency offset, over a future time window, comparing the predicted states and the health risk indices of Step 3 against preset safety boundaries to generate an early-warning index, and triggering graded early warning and engineering control measures according to the grade.
  2. The method according to claim 1, wherein in Step 1 the key cross sections of the bridge are key stress locations of the main girder, the supports and the stay-cable anchorage zones; the deformation radar is mounted on stable bases at the tops of the bridge towers on both sides and is a Ku-band frequency-modulated continuous-wave millimeter-wave radar with a center frequency of 16.7 GHz, a wavelength λ = 3.2 mm and a sampling rate of 1 kHz, covering all deflection-sensitive areas of the main girder through three groups of array antennas; the vision sensor is mounted on a lateral support of a bridge pier, covers all visual targets, and is equipped with a rain- and dust-proof cover and an automatic fill light; the accelerometers are mounted at mid-span and above the supports by magnetic mounts or bolts and collect structural vibration signals in real time.
  3. The method according to claim 2, wherein the geometric calibration in Step 1 comprises placing a planar calibration plate containing 48 coded points at several height levels of the bridge deck, photographing at least 10 groups of images from different viewing angles, solving the homography matrix with a direct linear transformation algorithm, and establishing the mapping between pixel coordinates and the global structural coordinate system, the mapping being described by the homography:

     s·[x_i', y_i', 1]^T = H_{3×3}·[x_i, y_i, 1]^T    (1)

     where (x_i, y_i) are point coordinates on the image plane, (x_i', y_i') are the corresponding point coordinates in the physical world coordinate system, H_{3×3} is the homography matrix, h11, h12, h21, h22 are the rotation and scaling factors between the image plane and the physical plane, h13, h23 are the translations in the horizontal and vertical directions, h31, h32 are the deformation parameters caused by perspective projection, and h33 is the normalization factor, usually set to 1.
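The pixel-to-structure mapping of claim 3 can be sketched as follows; this is an illustrative example, not the patent's code, and the homography values are invented (a pure scale-plus-translation with 0.5 mm per pixel).

```python
# Hedged sketch: mapping an image-plane point to physical-plane coordinates
# with a 3x3 homography H, including the perspective division by the third
# row (h33 acts as the normalization factor).

def apply_homography(H, x, y):
    """Map image-plane point (x, y) to physical coordinates (x', y')."""
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w  # perspective division

# Invented calibration: 0.5 mm/pixel scale, (10, 20) mm offset, no perspective.
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, 100, 200))  # -> (60.0, 120.0)
```

In practice H would be estimated from the 48 coded points by direct linear transformation rather than written by hand.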
  4. The method according to claim 1, wherein the digital image correlation algorithm of Step 2 measures image similarity with the normalized cross-correlation formula:

     C(u, v) = Σ_{x,y} [f(x, y) − f̄]·[g(x+u, y+v) − ḡ] / sqrt( Σ_{x,y} [f(x, y) − f̄]² · Σ_{x,y} [g(x+u, y+v) − ḡ]² )    (7)

     where C(u, v) is the correlation coefficient, f is the template image and f̄ the mean gray value of the template, g is the intensity of the search image and ḡ the mean gray value of the corresponding region, (x, y) are local pixel coordinates in the template image, and (u, v) is the displacement offset of the image to be searched relative to the template; the conversion into node physical displacements and rotation angles comprises mapping pixel units to physical units through a scaling factor and then computing the node displacement and rotation angle by combining the camera intrinsic parameters, extrinsic parameters and target geometric parameters, the scaling factor being:

     K = H_i / M_Z    (9)

     where K is the scaling factor, H_i is a length in the physical world, and M_Z is the corresponding number of pixels on the image plane; combining the scaling factor K with the sub-pixel displacement sequence, the node physical displacements and the physical rotation angle are computed as:

     d_A = K·Δv_A,  d_B = K·Δv_B    (43)
     θ = K·(Δv_A − Δv_B) / L    (44)

     where d_A and d_B are the vertical physical displacement components of target A and target B on the same section, L is the known horizontal physical spacing between target A and target B, Δv_A and Δv_B are the displacement offsets of the searched image relative to the template, and K is the scaling factor; the interpolation alignment uses linear interpolation, aligning the existing timestamps computationally through:

     ŷ(t) = y_1 + (y_2 − y_1)·(t − t_1) / (t_2 − t_1)    (10)

     where (t_1, y_1) and (t_2, y_2) are two known time-data sample points and ŷ(t) is the estimated datum at the target time t.
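Two steps of claim 4, the pixel-to-physical scaling (Eq. 9) and the linear-interpolation timestamp alignment (Eq. 10), can be sketched in a few lines; the target size, pixel counts and sample values below are invented for illustration.

```python
# Hedged sketch, not the patent's code: scaling factor K = H_i / M_Z and
# linear interpolation of a radar sample onto a vision timestamp.

def pixel_to_physical(delta_px, K):
    """Convert a sub-pixel displacement (pixels) to physical units via K."""
    return K * delta_px

def lerp(t1, y1, t2, y2, t):
    """Linearly interpolate the sample pairs (t1, y1), (t2, y2) at time t."""
    return y1 + (y2 - y1) * (t - t1) / (t2 - t1)

K = 80.0 / 400.0                      # e.g. an 80 mm target spans 400 px
d = pixel_to_physical(2.5, K)         # 2.5 px sub-pixel shift
v = lerp(0.00, 1.0, 0.10, 2.0, 0.05)  # radar sample aligned to vision stamp
print(d, v)  # -> 0.5 1.5
```

The same `lerp` would be applied per channel to bring the 1 kHz radar series and the video-rate displacement series onto one uniform timestamp grid.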
  5. The method according to claim 1, wherein the state equation based on structural dynamics parameters of Step 3 is:

     x_{k+1} = A·x_k + B·u_k + w_k    (17)

     where x_{k+1} is the predicted structural state vector at time k+1, A is the state transition matrix, containing the time-evolution terms of the mass, damping and stiffness matrices, x_k is the optimal estimated structural state vector at time k, u_k is the system control input at time k, and w_k is the process noise; the multi-source heterogeneous measurement equation comprises a visual displacement measurement equation, an acceleration measurement equation, and a synchronization-difference and frequency-offset measurement equation; the visual displacement measurement equation is:

     z_{v,k} = H_v·x_k + v_{v,k}    (19)

     where z_{v,k} is the visual measurement vector, H_v is the visual observation matrix and v_{v,k} is the visual noise; the acceleration measurement equation is:

     z_{a,k} = H_a·x_k + v_{a,k}    (20)

     where z_{a,k} is the acceleration observation vector at time k, H_a is the acceleration observation matrix and v_{a,k} is the acceleration noise; the synchronization difference Δτ_k is computed by timestamp comparison and the frequency offset Δf_k by spectral analysis, and the synchronization-difference and frequency-offset measurement equation is constructed as:

     z_{s,k} = H_s·x_k + v_{s,k}    (21)

     where z_{s,k} is the measurement vector, H_s the observation matrix, x_k the state vector and v_{s,k} the measurement noise vector; the synchronization-difference threshold judgment is:

     |Δτ_k − Δτ_{k−1}| > 3σ_τ    (27)

     where Δτ_k is the synchronization-difference measurement at time k, Δτ_{k−1} that at time k−1, and σ_τ the statistical standard deviation of the synchronization difference; the frequency mutation detection formula is:

     R_f = |Δf_k| / f_0    (28)

     where R_f is the frequency mutation detection index, Δf_k the frequency variation at time k, and f_0 the reference frequency of the system; the deflection overrun alarm formula is:

     R_d = max_i (|d_i| − d_lim)    (29)

     where R_d is the overrun warning index, d_i the deflection of the i-th measuring point, d_lim the allowable safety limit of the deflection, and the maximum is taken over the deflection-minus-limit differences of all measuring points; the rotation-angle mutation detection formula is:

     R_r = |θ_k − θ_0|    (30)

     where R_r is the rotation-angle mutation detection index, θ_k the rotation angle at the current time and θ_0 the initial reference angle; if the exceedance lasts for more than 10 minutes, the rotation angle is judged abnormal; the main vibration mode distortion detection formula is:

     R_m = 1 − (φ_0^T·φ_k)² / [(φ_0^T·φ_0)·(φ_k^T·φ_k)]    (31)

     where R_m is the main-mode distortion detection index, φ_0^T is the transpose of the reference main mode matrix, φ_k is the current main vibration mode matrix and φ_0 the reference main vibration mode matrix; if R_m exceeds its threshold, the main vibration mode is significantly distorted and structural damage is indicated.
  6. The method according to claim 1, wherein the Kalman-filter data fusion algorithm of Step 3 performs the following iteration: Step 31, state prediction: infer the prior state estimate at time k from the posterior state estimate at time k−1 and the system dynamic model:

     x̂_k⁻ = A·x̂_{k−1} + B·u_{k−1}    (22)

     where x̂_k⁻ is the prior state estimate at time k, x̂_{k−1} the posterior state estimate at time k−1, A the state transition matrix, B the control input matrix and u_{k−1} the control input vector at time k−1; the state at the current time is predicted from the optimal state at the previous time plus the contribution of the control input; Step 32, covariance prediction: propagate the uncertainty of the posterior state estimate at time k−1 and superimpose the influence of the process noise to obtain the covariance matrix of the prior state estimate at time k:

     P_k⁻ = A·P_{k−1}·A^T + Q    (23)

     where P_k⁻ is the covariance matrix of the prior state estimate at time k, P_{k−1} the covariance matrix of the posterior state estimate at time k−1, T denotes the transpose and Q is the process noise covariance matrix; Step 33, state update: based on the prior state estimate of Step 31 and the prior covariance matrix of Step 32, compute the Kalman gain with the observation at time k, correct the prior state estimate at time k to obtain the posterior state estimate at time k, and compute the covariance matrix of the posterior state estimate with the covariance update formula to describe its uncertainty; the Kalman gain is:

     K_k = P_k⁻·H^T·(H·P_k⁻·H^T + R)^{−1}    (24)

     where K_k is the Kalman gain matrix, H the observation matrix, R the observation covariance matrix and H^T the transpose of the observation matrix at time k; the posterior state estimate at time k is computed with the Kalman filter state update formula:

     x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻)    (25)

     where, in the linear Kalman filter case, the predicted observation simplifies to H·x̂_k⁻, so that z_k − H·x̂_k⁻ is the observation residual; x̂_k is the posterior state estimate at time k, representing the optimal result of fusing the model and the observation, z_k is the observation vector at time k, H·x̂_k⁻ is the predicted observation from the prior state estimate, and K_k is the Kalman gain matrix; the covariance matrix of the posterior state estimate at time k is:

     P_k = (I − K_k·H)·P_k⁻    (26)

     where P_k is the covariance matrix of the posterior state estimate at time k and I is the identity matrix; the observation injects new information into the state estimate, so the posterior uncertainty should be smaller than the prior uncertainty; K_k·H is the correction of the covariance by the residual, reflecting the accuracy improvement after information fusion as the observation reduces the state uncertainty.
  7. The method according to claim 1, wherein the short-term response prediction model in Step 4 is a long short-term memory (LSTM) network whose cells are provided with a forget gate, an input gate, a temporary (candidate) cell state and an output gate; the forget gate is:

     f_t = σ(W_f·[h_{t−1}, x_t] + b_f)    (33)

     where σ is the Sigmoid function with output range [0, 1], W_f and b_f are the forget gate weight matrix and bias term, optimized through training, and [h_{t−1}, x_t] is the joint feature representation of the current input and the historical hidden state; the input gate is:

     i_t = σ(W_i·[h_{t−1}, x_t] + b_i)    (34)

     where i_t is the output of the input gate at time step t, in the range (0, 1), and W_i and b_i are the input gate weight matrix and bias term; the temporary cell state is:

     C̃_t = tanh(W_C·[h_{t−1}, x_t] + b_C)    (35)

     where C̃_t is the temporary cell state at time step t, tanh is the hyperbolic tangent activation function, compressing the linear transformation result into the (−1, 1) interval, and W_C and b_C are the weight matrix and bias term of the temporary cell state; the cell state is:

     C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t    (36)

     where C_t is the cell state, f_t the forget gate output in (0, 1), C_{t−1} the cell state at time t−1, i_t the input gate output in (0, 1), C̃_t the candidate cell state and ⊙ element-wise multiplication; the output gate is:

     o_t = σ(W_o·[h_{t−1}, x_t] + b_o)    (37)
     h_t = o_t ⊙ tanh(C_t)    (38)

     where o_t is the output gate value at time step t, in the range 0 to 1, σ is the Sigmoid activation function, W_o and b_o are the weight matrix and bias term of the output gate, h_{t−1} is the hidden state vector at time t−1, x_t is the input vector at time t, and h_t is the hidden state output generated at time t.
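One LSTM step of claim 7 can be sketched with scalar weights; as a simplification the concatenation [h_{t−1}, x_t] is replaced by a scalar sum, and all weights are invented single numbers rather than trained matrices.

```python
# Scalar sketch of one LSTM cell step: forget gate, input gate, candidate
# cell state, cell-state update, output gate, hidden state.
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def lstm_step(x_t, h_prev, c_prev, w, b):
    z = h_prev + x_t                          # scalar stand-in for [h, x]
    f = sigmoid(w["f"] * z + b["f"])          # forget gate, in (0, 1)
    i = sigmoid(w["i"] * z + b["i"])          # input gate, in (0, 1)
    c_tilde = math.tanh(w["c"] * z + b["c"])  # candidate state, in (-1, 1)
    c = f * c_prev + i * c_tilde              # cell-state update
    o = sigmoid(w["o"] * z + b["o"])          # output gate, in (0, 1)
    h = o * math.tanh(c)                      # hidden state output
    return h, c

w = {"f": 0.5, "i": 0.5, "c": 1.0, "o": 0.5}  # invented weights
b = {"f": 0.0, "i": 0.0, "c": 0.0, "o": 0.0}
h, c = lstm_step(x_t=1.0, h_prev=0.0, c_prev=0.0, w=w, b=b)
print(h, c)  # both bounded by the gate and tanh ranges
```

In the patent's model, `x_t` would be the fused state vector of Step 3 (deflection, rotation angle, frequency offset) rather than a scalar.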
  8. The method according to claim 7, wherein the long short-term memory network prediction model in Step 4 is provided with an output mapping layer that transforms the LSTM hidden state into structural state prediction values through a weight matrix and a bias term and outputs a prediction sequence of K future time steps:

     ŷ_{t+k} = W_y·h_{t+k} + b_y,  k = 1, …, K    (39)

     where W_y and b_y are the weight matrix and bias term of the output layer, ŷ_{t+k} is the structural state prediction of the model at time t+k (where k = 1, …, K), and h_{t+k} is the hidden state vector generated by the model at time t+k.
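The output mapping layer of claim 8 is a plain affine map from the hidden state to the predicted structural state; the weights, bias and hidden vector below are made up for illustration (two outputs standing in for deflection and rotation angle).

```python
# Hedged sketch of the output mapping layer: y = W h + b, written as a
# plain-Python matrix-vector product.

def output_layer(h, W, b):
    """Affine map from hidden state h to structural-state predictions."""
    return [sum(wi * hi for wi, hi in zip(row, h)) + bi
            for row, bi in zip(W, b)]

h = [0.2, -0.1, 0.4]          # hidden state at some future step t+k
W = [[1.0, 0.0, 0.5],         # invented 2x3 output weight matrix
     [0.0, 2.0, 0.0]]
b = [0.1, 0.0]                # invented bias term
print(output_layer(h, W, b))  # -> [0.5, -0.2]
```

Applying the same map to h_{t+1}, …, h_{t+K} yields the K-step prediction sequence of the claim.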
  9. The method according to claim 1, wherein the early-warning index in Step 4 is generated as:

     I(t+k) = ŷ(t+k) / y_lim    (42)

     where I(t+k) is the early-warning index at time t+k, ŷ(t+k) is the predicted value, and y_lim is the corresponding upper allowable design limit; the graded early warning comprises first-, second- and third-level warnings; the first-level warning corresponds to 0.8 ≤ I < 0.9, with the engineering control measure of prompting that the structural response approaches the design value and that the monitoring frequency should be increased; the second-level warning corresponds to 0.9 ≤ I < 1.0, with the engineering control measure of prompting an overrun risk and recommending that the current process be stopped for inspection; the third-level warning corresponds to I ≥ 1.0, or the predicted value exceeding the confidence interval, prompting that the structure is in an unsafe state and that construction must be stopped immediately and emergency measures taken.
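The three-level grading of claim 9 reduces to comparing the index I = prediction / limit against the 0.8 / 0.9 / 1.0 boundaries; the limit value and predictions below are invented for illustration.

```python
# Hedged sketch of the early-warning index and graded response in claim 9.
# Returns 0 for normal, 1/2/3 for first-/second-/third-level warnings.

def warning_level(pred, limit):
    I = pred / limit          # early-warning index, Eq. (42)
    if I >= 1.0:
        return 3              # unsafe: stop construction, emergency measures
    if I >= 0.9:
        return 2              # overrun risk: pause current process, inspect
    if I >= 0.8:
        return 1              # approaching design value: increase monitoring
    return 0

limit = 50.0                  # assumed allowable deflection (mm)
print([warning_level(p, limit) for p in [30.0, 42.0, 47.0, 55.0]])
# -> [0, 1, 2, 3]
```

The claim also escalates to level 3 when the prediction leaves its confidence interval, a condition omitted here for brevity.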

Description

Bridge structure intelligent sensing and evaluating method based on combination of machine vision and radar

Technical Field

The invention relates to the technical field of bridge health monitoring, and in particular to an intelligent bridge structure sensing and evaluating method based on the combination of machine vision and radar.

Background

As the loads on modern transportation infrastructure continue to climb, the safety of bridges, as critical load-bearing structures, is severely challenged. Traditional health monitoring systems struggle to meet precise prevention-and-control requirements because of defects in their technical architecture, with three core bottlenecks. In the sensing dimension, although deformation radar offers millimeter-level displacement monitoring accuracy, it cannot capture micrometer-level early damage signals and its dynamic response is delayed; machine vision is severely disturbed by illumination, occlusion and haze, and feature matching easily fails on complex steel-structure surfaces, so the two can hardly achieve full-scale coupled analysis of local strain and global vibration modes. Heterogeneous sensors adopt a loosely coupled architecture that lacks a unified space-time reference frame: differences between the coordinate systems of radar line-of-sight displacement and visual node displacement accumulate fusion errors, the sampling frequencies of the multi-source data are asynchronous (for example, a 30 fps video stream against a 1 kHz radar causes timing misalignment), and traditional Kalman filtering adapts poorly to the nonlinear working conditions of large bridge deformations.
The prediction and early-warning stage is limited by a static threshold alarm mechanism: models such as LSTM do not fully integrate the prior knowledge of multi-modal sensors, the early-warning indices are one-dimensional, multi-dimensional risk associations such as frequency offset and synchronization-difference mutation are ignored, and a graded early-warning mechanism is lacking, making it difficult to support progressive intervention from hidden-danger identification to emergency treatment. Research shows that bridge damage evolution is strongly nonlinear and time-varying, and the coverage of a single sensor is insufficient, so critical damage can go unreported. These systematic defects push the industry to build a new system combining multi-source collaborative perception and intelligent reasoning, breaking through the ceiling of the prior art with a unified space-time reference, dynamic data fusion and a predictive early-warning mechanism.

Disclosure of Invention

Aiming at the problems in the prior art, the invention provides an intelligent sensing and evaluating method for bridge structures based on the combination of machine vision and radar. It aims to break through the limitations of single sensors in spatial coverage, dynamic response and weak-signal capture, and uses intelligent algorithms to fuse heterogeneous data, improving the anti-interference capability, weak-deformation sensitivity and real-time dynamic response of the monitoring system in complex environments, ultimately supporting early damage identification, state evaluation and graded early warning of bridges.
To achieve the above object, the present invention is specified as follows. The intelligent bridge structure sensing and evaluating method based on the combination of machine vision and radar comprises the following steps: Step 1, arranging visual targets on key cross sections of a bridge, installing a deformation radar, a vision sensor and accelerometers, completing geometric calibration and time synchronization, and establishing a spatial mapping relation among the radar line-of-sight coordinate system, the camera imaging coordinate system and the bridge structure coordinate system through on-site calibration to form a unified space-time perception reference; Step 2, based on the space-time perception reference of Step 1, continuously acquiring echo signals of the key cross sections with the deformation radar, and obtaining a one-dimensional deformation time series along the line-of-sight direction through frequency-modulation demodulation, phase unwrapping, atmospheric phase correction and filtering/denoising; acquiring bridge image sequences with the vision sensor, extracting the sub-pixel displacement sequence of each visual target center with a digital image correlation algorithm or a deep-learning-based target detection model to obtain two- or three-dimensional structural displacement, combining the calibrated camera internal parameters, external parameters and the visual target g