CN-122023466-A - Take-off and landing field weak vibration monitoring method based on computer vision
Abstract
The invention discloses a take-off and landing field weak vibration monitoring method based on computer vision, comprising the steps of: S1, scaling and cropping the take-off and landing field video acquired by a camera frame by frame, storing the cropped images and outputting their width and height to obtain a video frame image sequence; S2, assigning horizontal strip magnification factors based on amplitude sorting; S3, performing Eulerian motion magnification and horizontal strip combination based on Real-ESRGAN; and S4, extracting vibration signals by tracking the motion trajectories of multiple sampling points with the Farneback optical flow method, obtaining the displacement time-history curve and spectrogram of the magnified video, and carrying out modal analysis. The method requires no sensors to be installed on the surface of the take-off and landing field, reduces the motion distortion and blurring caused by depth of field due to the camera position, realizes dynamically adjusted magnification of key information, and remarkably improves both the precision and the visual effect of weak vibration monitoring.
Inventors
- ZHANG CHI
- LI TIANTIAN
- LI XIAOSONG
- LI CHANGHUI
- ZHANG CHENYU
- LIU XUAN
- ZHANG JIAJIA
Assignees
- 中国民航大学 (Civil Aviation University of China)
Dates
- Publication Date
- 20260512
- Application Date
- 20250823
Claims (10)
- 1. A take-off and landing field weak vibration monitoring method based on computer vision, characterized by comprising the following steps: S1, scaling and cropping the take-off and landing field video acquired by a camera frame by frame, storing the cropped images and outputting their width and height to obtain a video frame image sequence; S2, assigning horizontal strip magnification factors based on amplitude sorting, namely mining inter-frame motion information through multidimensional analysis of the video frame image sequence, taking horizontal strips as the basic processing units, and calculating the motion displacement characteristics of the strip at the same position in each video frame; S3, performing Eulerian motion magnification and horizontal strip combination based on Real-ESRGAN, namely performing super-resolution reconstruction on the video frame image sequence obtained in S1 with Real-ESRGAN to improve video quality, and magnifying the fine motion in the video in combination with an Eulerian motion magnification algorithm; and S4, extracting vibration signals by tracking the motion trajectories of multiple sampling points with the Farneback optical flow method, obtaining the displacement time-history curve and spectrogram of the magnified video, and carrying out modal analysis.
- 2. The take-off and landing field weak vibration monitoring method according to claim 1, characterized in that in S1, the take-off and landing field video is collected by cameras arranged around the field, the target detection area is adjusted according to the angle, distance and size of the collected video, and an image processing library is called to preprocess the video, the preprocessing comprising: S11, defining a video scaling ratio and a mouse callback function; S12, opening the video, reading the first frame image, and scaling it according to the defined scaling ratio; S13, applying the mouse callback function of S11, drawing a rectangular box on the first frame read in S12 by dragging the mouse, cropping the selected target area, storing the cropped image and outputting its width and height; S14, preprocessing all remaining frames of the video in sequence according to the method of S12-S13 to obtain a video frame image sequence.
- 3. The take-off and landing field weak vibration monitoring method according to claim 1, wherein S2 comprises the steps of: S21, spatio-temporal initialization of the video frame image sequence, namely performing a stream-reading operation on the video frame image sequence obtained in S1 with a video stream capture interface to construct its spatio-temporal data structure; S22, horizontal strip subdivision, namely setting a strip height parameter and uniformly subdividing the reference frame along the vertical spatial dimension into a plurality of horizontal strip areas; S23, sampling point discretization, namely setting sampling interval parameters and sampling regularly in the height and width directions within each strip area obtained in S22 to generate a discrete sampling point set, the spatial distribution of the sampling points following these rules: the y coordinate y_i of a sampling point in the height direction satisfies y_i = y_start + k·Δy, wherein y_start is the starting height of the current strip, Δy is the sampling interval in the height direction, and k, a natural number, is the interval count in the height direction; the x coordinate x_j of a sampling point in the width direction satisfies x_j = m·Δx, wherein Δx is the sampling interval in the width direction and m, a natural number, is the interval count in the width direction; all sampling points are grouped by the strip area they belong to, constructing a sampling point subset for each strip; S24, optical flow field estimation, namely converting the reference frame of S21 into a grayscale image, iterating over the video frame image sequence obtained in S1 frame by frame, converting each frame to grayscale, and calculating the optical flow field between every two adjacent grayscale frames, the optical flow field describing the two-dimensional motion vector (f_x, f_y) of each pixel between adjacent frames, wherein f_x and f_y are the motion components in the x and y directions respectively; S25, displacement feature extraction, namely reading from the optical flow field calculated in S24 the two-dimensional motion vector of each sampling point in its strip area and calculating the displacement amplitude d of each sampling point with the Euclidean distance formula d = √(f_x² + f_y²); the displacement amplitudes of all sampling points in each strip area are arithmetically averaged to obtain the strip's displacement amplitude in the current frame, d̄ = (1/n)·Σ_{k=1}^{n} d_k, wherein n is the number of sampling points in the strip and d_k is the displacement amplitude of the k-th sampling point; S26, statistical modelling and sorting of strip displacement features, namely traversing the whole video frame image sequence obtained in S1, statistically analysing the displacement amplitude of every strip in every frame, and averaging over the time dimension to obtain the mean displacement amplitude of each strip; the mean displacement amplitude of each strip is associated with its original spatial index to build a list of index-displacement pairs, which is sorted by displacement amplitude in ascending order to obtain an ordered list; S27, sorting-based dynamic magnification factor assignment, namely setting a maximum magnification factor α_max, segmenting the list sorted in S26, and dividing it into α_max contiguous segments using an integer-division-with-remainder strategy.
- 4. The take-off and landing field weak vibration monitoring method according to claim 3, wherein in S27, with N the total number of strips, the segment size is s = ⌊N/α_max⌋, the remaining N mod α_max strips being distributed among the segments; each segment is assigned a unique magnification factor α, which increases from 1 up to the maximum magnification α_max.
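The integer-division-with-remainder segmentation of claim 4 can be illustrated as follows. How the remainder strips are distributed (here: one extra to each leading segment) is an assumption, since the claim does not fix it, and the function name is illustrative.

```python
def assign_magnifications(n_strips, alpha_max):
    """Split the sorted strip list into alpha_max contiguous segments (S27 sketch):
    base size N // alpha_max, remainder spread one extra per leading segment;
    segment i (0-based) gets magnification i + 1, rising to alpha_max."""
    base, rem = divmod(n_strips, alpha_max)
    sizes = [base + 1 if i < rem else base for i in range(alpha_max)]
    factors = []
    for i, size in enumerate(sizes):
        factors.extend([i + 1] * size)
    return factors  # factors[k] = magnification of the k-th strip in sorted order

print(assign_magnifications(10, 4))  # [1, 1, 1, 2, 2, 2, 3, 3, 4, 4]
```

Because the list was sorted by ascending mean displacement, the quietest strips get the smallest factor and the most active strips get α_max.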
- 5. The take-off and landing field weak vibration monitoring method according to claim 1, wherein S3 comprises the steps of: S31, color space mapping, namely re-reading the video file preprocessed in S1 and, in a frame-by-frame processing loop, mapping each video frame from the BGR color space to the RGB color space with a color space conversion matrix and then converting it to the NTSC color space with the NTSC conversion matrix; S32, constructing a Laplacian pyramid for each channel of each video frame in the NTSC color space after S31, using the difference of Gaussian pyramid downsampling and upsampling: L_a = G_a − Expand(G_{a+1}), wherein G_a is the a-th layer of the Gaussian pyramid, L_a is the a-th layer of the Laplacian pyramid, and Expand denotes the upsampling operation; S33, joint time-frequency domain filtering and motion magnification, namely setting filtering parameters according to the magnification requirement in combination with the basic information of the read video, and magnifying the motion information in different frequency ranges; S34, Laplacian pyramid reconstruction and inverse color space conversion, comprising performing Laplacian pyramid reconstruction on the difference signal magnified in S33, attenuating the chrominance channels, and converting the processed NTSC image back to the RGB color space through the inverse conversion matrix; S35, Real-ESRGAN enhancement, horizontal strip segmentation and independent video generation, comprising the following sub-steps: S351, enhancement, wherein the video obtained in S34 under the different magnification factors is converted into a frame sequence through ffmpeg, each frame is enhanced by calling the Real-ESRGAN model, the enhanced images are stored, and the enhanced magnified video is re-synthesised; S352, strip division, wherein according to the specified strip height h_strip the first frame image after the S351 enhancement is uniformly divided into a plurality of strip areas in the vertical direction, the video frame sequence is traversed and each frame is divided likewise, the start and end positions of each strip area being y_start = w·h_strip and y_end = (w+1)·h_strip, wherein w is the strip index and 0 ≤ w < N; S353, video generation, wherein according to the magnification factor assigned to each strip in S27 the corresponding frame sequence is extracted from the enhanced video obtained in S351, the frame sequence of each strip is encoded and stored as an independent strip video file with a video coding algorithm, and the frame rate and resolution of the video are set to remain consistent with the original video; and S36, time-sequence alignment and combination of the strip videos, namely setting the storage path of the result video and combining the strip videos according to the magnification factor list obtained in S2 to obtain the complete magnified video.
- 6. The take-off and landing field weak vibration monitoring method according to claim 5, wherein S33 comprises the steps of: calculating the temporal filtering parameters, namely setting a low cutoff frequency f_low and a high cutoff frequency f_high and calculating the temporal filtering parameters r_1 and r_2 in combination with the video frame rate fr; performing the double low-pass filtering operation, namely temporally filtering the Laplacian pyramid obtained in S32, using r_1 and r_2 respectively to update the states of two low-pass filters; with L_t^(1) and L_t^(2) the outputs of the two low-pass filters at time t and P_t the Laplacian pyramid at time t, the update formulas are L_t^(1) = r_1·P_t + (1−r_1)·L_{t−1}^(1) and L_t^(2) = r_2·P_t + (1−r_2)·L_{t−1}^(2); calculating the difference signal F_t of the two low-pass filter outputs, which contains the motion information in the specified frequency range: F_t = L_t^(1) − L_t^(2); spatial-band magnification, namely setting a spatial frequency parameter λ_c, calculating a magnification factor for each spatial frequency band from the magnification factor α of each strip, and initializing the intermediate parameters δ = λ_c/(8·(1+α)) and λ_val = √(h²+w²)/3, wherein h and w are the height and width of the video respectively; starting from the last layer of the Laplacian pyramid, the magnification factor α_curr of the current layer is calculated layer by layer as α_curr = λ_val/(8·δ) − 1, λ_val being halved when moving to the next layer; the difference signal F_t is magnified or zeroed according to the size of the magnification factor and the layer index, so as to highlight the motion information at a specific scale.
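Per pyramid level, the double low-pass filtering of S33 reduces to two first-order IIR filters whose outputs are subtracted. This is a minimal sketch assuming the standard update L_t = r·P_t + (1−r)·L_{t−1}; the r_1, r_2 values are chosen arbitrarily here rather than derived from the cutoff frequencies, and the function name is illustrative.

```python
import numpy as np

def iir_bandpass(frames, r1, r2):
    """Difference of two first-order IIR low-pass filters over a pyramid-level sequence.
    With r1 > r2, F_t = L1_t - L2_t keeps motion between the two effective cutoffs."""
    low1 = frames[0].astype(np.float64)
    low2 = frames[0].astype(np.float64)
    out = []
    for p in frames[1:]:
        low1 = r1 * p + (1 - r1) * low1   # fast (high-cutoff) low-pass
        low2 = r2 * p + (1 - r2) * low2   # slow (low-cutoff) low-pass
        out.append(low1 - low2)           # band-passed difference signal F_t
    return out

# A 1-pixel "level" oscillating in time, as a stand-in for one Laplacian layer.
frames = [np.array([[np.sin(0.4 * t)]]) for t in range(50)]
F = iir_bandpass(frames, r1=0.4, r2=0.05)
print(len(F))  # one difference signal per subsequent frame
```

Each F_t is then scaled by the layer's α_curr (or zeroed at the coarsest/finest layers) before the reconstruction of claim 7.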
- 7. The take-off and landing field weak vibration monitoring method according to claim 5, wherein the pyramid reconstruction in S34 is performed as follows: the Laplacian pyramid reconstruction operation is applied to the difference signal magnified in S33, the detail information of the original image is gradually recovered from the last layer of the Laplacian pyramid through upsampling and superposition operations, and the reconstruction result is added to the original NTSC frame to achieve the motion magnification effect; with R_a the result of the a-th layer in the reconstruction and L_a the a-th layer of the Laplacian pyramid: R_a = Expand(R_{a+1}) + L_a.
- 8. The take-off and landing field weak vibration monitoring method according to claim 5, wherein S36 comprises the steps of: initializing a video writer, namely setting the storage path, frame rate and size parameters of the combined video; frame-level time-sequence and size alignment, namely reading, frame by frame and in the original strip position order, the strip images at the same moment from each independent strip video file generated in S35; and splicing and frame merging, namely splicing all strip images of the same frame in the vertical direction using a vertical matrix stacking function to obtain a complete merged frame, writing each processed merged frame into the merged video file until all video frames are processed, and saving the merged video file to the set storage path.
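The vertical stacking of claim 8 is NumPy's `vstack` applied to the same-timestamp strip images; the names and strip sizes below are illustrative.

```python
import numpy as np

def merge_strip_frames(strip_frames):
    """Vertically stack same-timestamp strip images, in original position order,
    back into one full frame (claim 8)."""
    return np.vstack(strip_frames)

# Four 16-row strip images standing in for one timestamp of four strip videos.
strips = [np.full((16, 64, 3), i, np.uint8) for i in range(4)]
merged = merge_strip_frames(strips)
print(merged.shape)  # (64, 64, 3)
```

Each merged frame is then written out by the video writer initialized in the first sub-step.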
- 9. The take-off and landing field weak vibration monitoring method according to claim 1, wherein S4 comprises the steps of: S41, creating an empty list for subsequently storing the feature point coordinates manually selected by the user; S42, reading the first frame image from the video obtained in S3, creating an interactive window and binding a mouse callback function; when the user left-clicks in the window, the callback function captures the coordinates of the clicked position, stores them in the empty list and draws a red marker point on the image; S43, converting the first frame image into a grayscale image used as the initial reference frame for optical flow calculation, and creating a two-dimensional displacement list for storing the displacement histories of all sampling points; S44, reading the video frame by frame, converting each frame into a grayscale image and calculating the optical flow field between adjacent frames; for each sampling point, the motion components are extracted from the optical flow field and the Euclidean distance is calculated as the displacement amplitude and stored in the two-dimensional displacement list of S43; S45, calculating a time sequence in seconds from the video frame rate and the number of processed frames, plotting the displacement time-history curve of the sampling points, setting the axis labels, title and legend, and saving the time-history curve; S46, performing a fast Fourier transform on the displacement history data of each sampling point obtained in S43-S44, calculating its frequency-domain amplitudes, storing them in a frequency-domain list, and plotting and saving the spectrogram of the sampling point.
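The FFT step S46 can be sketched as below on a synthetic 3 Hz displacement history; the frame rate, duration and function name are illustrative assumptions.

```python
import numpy as np

def displacement_spectrum(history, fps):
    """Single-sided FFT amplitude spectrum of one point's displacement history (S46 sketch)."""
    n = len(history)
    amp = np.abs(np.fft.rfft(history)) / n          # normalized one-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)         # bin frequencies in Hz
    return freqs, amp

fps = 30.0
t = np.arange(300) / fps                            # 10 s of displacement samples
history = 0.5 * np.sin(2 * np.pi * 3.0 * t)         # synthetic 3 Hz vibration, 0.5 px amplitude
freqs, amp = displacement_spectrum(history, fps)
dominant = freqs[np.argmax(amp[1:]) + 1]            # skip the DC bin
print(dominant)  # 3.0
```

The peak location gives the dominant vibration frequency used in the modal analysis; in the patent the spectrum is then plotted and saved (claim 10 uses matplotlib for the time-history plot).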
- 10. The take-off and landing field weak vibration monitoring method according to claim 9, wherein in S45 the displacement time-history curve of the sampling points is plotted with matplotlib, the axis labels, title and legend are set, and the time-history plot is saved.
Description
Technical Field
The invention relates to the technical field of take-off and landing field vibration monitoring, and in particular to a take-off and landing field weak vibration monitoring method based on computer vision.
Background
Structural health monitoring captures state parameters such as structural deformation, vibration and load distribution without damaging the structure, judges whether damage, cracks or deformation exist inside the structure, and provides an important basis for its maintenance, repair and safety evaluation. Vibration is an important expression of the dynamic behaviour of a structure: measuring and analysing the vibration response of the structure under different working conditions reveals its dynamic characteristics and whether potential damage exists. Vibration monitoring technology divides mainly into contact and non-contact methods; contact techniques acquire vibration information with accelerometers or velocity sensors, while non-contact techniques use lasers or ultrasonic waves. With the continuous development of image processing technology, vibration monitoring using computer vision has seen continuous breakthroughs and wide application. Computer vision can collect image or video information of the structure through optical equipment such as cameras and then extract the structure's vibration characteristics with image processing algorithms. The approach has the advantages of being non-contact, supporting remote monitoring, and being unrestricted by structural surface materials.
Structural vibration monitoring based on computer vision divides mainly into marker-based and marker-free methods. The former places reflective or colored markers on the structure and measures displacement by tracking the movement of the markers, while the latter identifies and tracks edge or corner features on the structure's surface and measures displacement by feature matching. The rise of urban air traffic makes vibration monitoring of take-off and landing fields necessary, but research on it is still at an early stage, mainly because the vibration signal of a take-off and landing field is relatively weak and traditional vibration monitoring methods are difficult to apply effectively. Although contact methods are widely used, they require direct contact with the structure, can only monitor one fixed point at a time and make multi-point measurement difficult, may interfere with the normal operation of structures such as a take-off and landing field, involve a complex installation and maintenance process, and have limited ability to capture micro-vibrations. Non-contact methods such as laser vibrometers place high demands on the structural surface characteristics and environmental conditions, and their maintenance cost is relatively high. To better monitor minute vibrations, motion magnification algorithms have been developed. Motion magnification is an important technique in the field of computer vision: it amplifies tiny motions in video through image processing algorithms, making originally imperceptible vibrations or movements obvious and thus easier to analyse and monitor.
However, when the camera is too close to or too far from the target object, or the target object lies in a different depth-of-field range, the magnified motion in the video may be distorted or blurred, which affects the accuracy and reliability of the monitoring.
Disclosure of Invention
The invention aims to provide a take-off and landing field weak vibration monitoring method based on computer vision, so as to solve the problems of limited applicability and poor recognition of small vibrations when traditional sensors are applied to a vertical take-off and landing field, and to weaken the motion blur and distortion caused by depth of field after motion magnification. To this end, the invention adopts the following technical scheme: a take-off and landing field weak vibration monitoring method based on computer vision comprises the following steps: S1, scaling and cropping the take-off and landing field video acquired by a camera frame by frame, storing the cropped images and outputting their width and height to obtain a video frame image sequence; S2, assigning horizontal strip magnification factors based on amplitude sorting, namely mining inter-frame motion information through multidimensional analysis of the video frame image sequence, taking horizontal strips as the basic processing units, and calculating the motion displacement characteristics of the strip at the same position in each video frame.