CN-121724862-B - Liquid rocket engine simulated test video noise reduction system and noise reduction method
Abstract
The invention provides a liquid rocket engine simulated test-run video noise reduction system and noise reduction method, addressing the technical problem that existing image noise reduction methods struggle to meet the practical engineering requirements of denoising liquid rocket engine simulated test-run video. The system uses a color space conversion module to convert each input image frame from the original RGB format to the YUV format, separating the luminance component from the chrominance components. The Y channel, which carries most of the noise, is processed in multiple steps by a Y-channel processing unit that forms a joint spatial-temporal adaptive mechanism: edge weights and local variance provide spatial-domain adaptation to cope with severe motion scenes, respond more quickly to changes in the current frame, and resist trailing artifacts. The U and V channels, which carry only color information, are lightly denoised with bilateral filtering. The system thereby balances effect and efficiency, markedly improving both noise reduction quality and video stability.
Inventors
- ZHANG DI
- MA MENG
- LIU XIN
- HU ZHIJIE
- DING YONG
- FAN JIARUI
- HE XUAN
Assignees
- Xi'an Eurasia University (西安欧亚学院)
- Xi'an Jiaotong University (西安交通大学)
Dates
- Publication Date
- 20260505
- Application Date
- 20260213
Claims (7)
- 1. A liquid rocket engine simulated test-run video noise reduction system, characterized in that: it comprises an input module, a color space conversion module, a Y-channel processing unit, a UV-channel processing module, a channel fusion module and an output module; the input end of the input module receives the original frame sequence of the liquid rocket engine simulated test-run video and outputs image frames in the original RGB format frame by frame; the input end of the color space conversion module is connected with the output end of the input module and converts each image frame from the original RGB format to the YUV format, splitting it into three channel outputs, wherein the Y channel outputs a luminance component image, the U channel outputs a blue chrominance component image, and the V channel outputs a red chrominance component image; the Y-channel processing unit comprises a pre-denoising module, an edge weight calculation module, a local variance calculation module, a spatial-domain adaptive Q_map construction module, a multi-frame weighted memory prediction model, an improved Kalman filter and a high-frequency compensation module; the input end of the pre-denoising module is connected with the Y-channel output of the color space conversion module and performs pre-denoising on the luminance component image to obtain a pre-denoised Y-channel image; the input ends of the edge weight calculation module and the local variance calculation module are each connected with the output end of the pre-denoising module: the edge weight calculation module detects the edge regions of the pre-denoised Y-channel image to obtain the edge weight, and the local variance calculation module estimates the local noise level of each pixel in the pre-denoised Y-channel image to obtain the local variance; the two input ends of the spatial-domain adaptive Q_map construction module are connected with the output ends of the edge weight calculation module and the local variance calculation module respectively, and combine the edge weight and the local variance to construct a pixel-level spatial-domain adaptive process noise covariance Q_map from Q_base (the process-noise baseline), a frame-level adaptive coefficient, the local variance σ²(x), and the edge weight; the input end of the multi-frame weighted memory prediction model is connected with the Y-channel output of the color space conversion module, adds the luminance component image of the current frame output by the Y channel to a frame buffer, and predicts on the basis of a weighted average of the multi-frame images in the frame buffer to obtain a predicted value; the multi-frame weighted memory prediction model calculates the predicted value as the weighted average Ŷ_t = ( Σ_{i=1}^{M} w_i · Y_{t−i} ) / ( Σ_{i=1}^{M} w_i ), where w_i is the exponentially decaying weight of the i-th frame in the frame buffer, Y_{t−i} is the luminance of the i-th buffered frame before time t, M is the current number of frames in the frame buffer, and i is the frame index; the three input ends of the improved Kalman filter are connected with the output ends of the spatial-domain adaptive Q_map construction module, the multi-frame weighted memory prediction model and the pre-denoising module respectively; based on the spatial-domain adaptive process noise covariance Q_map and the predicted value, the filter computes the Kalman gain pixel by pixel so as to fuse the predicted value with the pre-denoised Y-channel image, yielding the Kalman-filtered Y-channel image; the improved Kalman filter computes the Kalman-filtered Y-channel image as Y′_t(x) = Ŷ_t(x) + K_t(x) · ( Z_t(x) − Ŷ_t(x) ), with K_t(x) = P_t(x) / ( P_t(x) + R ), K_t(x) ∈ [0, 1], where Y′_t(x) is the updated denoised value of each pixel, i.e. the pixel value of the Kalman-filtered Y-channel image; Ŷ_t(x) is the predicted value of the current frame; Z_t(x) is the observed value of the current frame, i.e. the pre-denoised Y-channel image of the current frame; K_t(x) is the Kalman gain; P_t(x) is the prediction covariance; and R is the measurement noise covariance; the two input ends of the high-frequency compensation module are connected with the output ends of the improved Kalman filter and the local variance calculation module respectively, receive the Kalman-filtered Y-channel image and the local variance, and repair the high-frequency details smoothed away during Kalman filtering to obtain a high-frequency enhanced Y-channel image; the two input ends of the UV-channel processing module are connected with the U-channel and V-channel outputs of the color space conversion module respectively, and apply a bilateral filtering algorithm to perform lightweight denoising on the chrominance component images output by the U and V channels, obtaining bilaterally filtered U-channel and V-channel images; the three input ends of the channel fusion module are connected with the output end of the high-frequency compensation module and the two output ends of the UV-channel processing module respectively, recombine the high-frequency enhanced Y-channel image and the bilaterally filtered U-channel and V-channel images into a complete image, and convert it back to the RGB format to obtain a denoised RGB-format image frame; one input end of the output module is connected with the output end of the channel fusion module, and the other input end receives the original video frame rate, so that the denoised RGB-format image frames are spliced into a complete denoised video at the original video frame rate.
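Claim 1's Q_map module combines the edge weight and local variance into a pixel-level process-noise covariance. The exact formula is not reproduced in this text, so the sketch below is only an illustrative assumption of how such a map could be built (larger Q in noisy flat regions, smaller Q on edges to protect them); the function name `build_q_map` and the default constants are hypothetical.

```python
import numpy as np

def build_q_map(local_var, edge_weight, q_base=0.05, alpha=1.0):
    """Pixel-level spatial-domain adaptive process-noise covariance (sketch).

    q_base  : process-noise baseline
    alpha   : frame-level adaptive coefficient
    local_var, edge_weight : per-pixel maps from the two upstream modules
    """
    # Normalize local variance to [0, 1] so it can scale the baseline:
    # noisier regions get a larger Q, letting the filter trust the observation more.
    v = local_var / (local_var.max() + 1e-8)
    # Edge weight in [0, 1]: strong edges shrink Q to protect detail.
    return q_base * alpha * (1.0 + v) * (1.0 - 0.5 * edge_weight)

rng = np.random.default_rng(0)
var_map = rng.random((4, 4))
edges = np.zeros((4, 4)); edges[1, 1] = 1.0   # one synthetic edge pixel
q_map = build_q_map(var_map, edges)
```

With this choice the edge pixel receives half the Q it would get in a flat region, which is the qualitative behavior the claim describes.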
- 2. The liquid rocket engine simulated test-run video noise reduction system according to claim 1, wherein the high-frequency compensation module calculates the high-frequency enhanced Y-channel image as: Y_enh = clip(Y_blur + λ × ΔY, 0, 255); where Y_enh is the high-frequency enhanced Y-channel image, Y_blur is the Gaussian-blurred image, ΔY is the high-frequency component, λ is the high-frequency enhancement gain, and clip(·, 0, 255) is a clipping function that keeps pixel values within the range [0, 255].
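Claim 2's compensation is an unsharp-masking-style step. A minimal NumPy sketch follows; as an assumption it substitutes a box blur for the Gaussian blur the claim specifies (so the example needs no external library), and the gain λ = 1.5 and kernel size are illustrative.

```python
import numpy as np

def high_freq_compensate(y_kalman, lam=1.5, ksize=3):
    """Sketch of claim 2: Y_enh = clip(Y_blur + lam * dY, 0, 255).

    Assumption: box blur stands in for the Gaussian blur of the claim.
    """
    pad = ksize // 2
    padded = np.pad(y_kalman.astype(np.float64), pad, mode="edge")
    h, w = y_kalman.shape
    y_blur = np.zeros((h, w), dtype=np.float64)
    for dy in range(ksize):          # accumulate the ksize x ksize window mean
        for dx in range(ksize):
            y_blur += padded[dy:dy + h, dx:dx + w]
    y_blur /= ksize * ksize
    delta = y_kalman - y_blur                      # high-frequency component dY
    return np.clip(y_blur + lam * delta, 0, 255)   # clipped enhancement

y = np.full((5, 5), 100.0); y[2, 2] = 200.0        # a single bright detail
y_enh = high_freq_compensate(y)
```

The bright pixel is pushed further above its blurred surroundings while the clip keeps everything in [0, 255].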
- 3. The liquid rocket engine simulated test-run video noise reduction system according to claim 2, wherein the UV-channel processing module calculates the bilaterally filtered U-channel image as: U′(p) = (1/W) Σ_{q∈N(p)} exp( −‖p − q‖² / (2σ_s²) ) · exp( −( U(p) − U(q) )² / (2σ_r²) ) · U(q); where U′(p) is the bilaterally filtered U-channel image, W is a normalization constant, p is the current pixel, q is a neighborhood pixel, U(p) is the chrominance component of the current pixel output by the U channel, U(q) is the chrominance component of a neighborhood pixel output by the U channel, σ_s is the spatial kernel parameter, and σ_r is the range kernel parameter; the bilaterally filtered V-channel image is calculated in the UV-channel processing module in the same way as the bilaterally filtered U-channel image.
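The standard bilateral filter used for the chroma channels can be sketched directly in NumPy. This is a brute-force reference implementation for illustration only (the σ values and radius are assumptions); a production system would use an optimized routine.

```python
import numpy as np

def bilateral_filter(u, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Brute-force bilateral filter on one chroma channel.

    Weight of neighbor q for pixel p:
      exp(-||p-q||^2 / 2*sigma_s^2) * exp(-(U(p)-U(q))^2 / 2*sigma_r^2)
    """
    h, w = u.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))    # spatial kernel
    padded = np.pad(u.astype(np.float64), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_k = np.exp(-((patch - u[i, j]) ** 2) / (2 * sigma_r**2))  # range kernel
            wgt = spatial * rng_k
            out[i, j] = (wgt * patch).sum() / wgt.sum()      # W normalizes the weights
    return out

noisy = np.full((6, 6), 128.0); noisy[3, 3] = 140.0          # one chroma outlier
smoothed = bilateral_filter(noisy)
```

The outlier is pulled toward its neighborhood while flat regions pass through unchanged, which is why bilateral filtering suffices as "lightweight" denoising for the low-noise chroma channels.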
- 4. The liquid rocket engine simulated test-run video noise reduction system according to claim 1, wherein the color space conversion module converts each input image frame from the original RGB format to the YUV format by a linear matrix operation of the form [Y, U, V]ᵀ = M · [R, G, B]ᵀ; where Y is the luminance component, U is the blue chrominance component, V is the red chrominance component, and R, G, B are the red, green and blue primary color components, respectively.
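The claim does not reproduce the conversion matrix itself, so the sketch below assumes the common BT.601 full-range coefficients; the patent's exact matrix may differ.

```python
import numpy as np

# BT.601 full-range RGB -> YUV coefficients (an assumption; the patent's
# matrix is not reproduced in this text).
M = np.array([
    [ 0.299,     0.587,     0.114   ],   # Y: luminance
    [-0.14713,  -0.28886,   0.436   ],   # U: blue chrominance
    [ 0.615,    -0.51499,  -0.10001 ],   # V: red chrominance
])

def rgb_to_yuv(frame):
    """Convert an (H, W, 3) RGB frame to YUV via one linear matrix operation."""
    return frame.astype(np.float64) @ M.T

frame = np.zeros((2, 2, 3)); frame[..., 0] = 255.0   # pure red test frame
yuv = rgb_to_yuv(frame)
```

A quick sanity check: a gray input (R = G = B) yields U ≈ V ≈ 0, confirming the chroma channels carry only color information.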
- 5. The liquid rocket engine simulated test-run video noise reduction system according to claim 4, wherein the pre-denoising module performs pre-denoising on the luminance component image while preliminarily preserving key details by weighted averaging over similar patches, yielding the pre-denoised Y-channel image.
- 6. The liquid rocket engine simulated test-run video noise reduction system according to claim 5, wherein the edge weight calculation module calculates the edge weight from the Laplacian operator and an exponential decay function, and the local variance calculation module calculates the local variance by box filtering over a local variance calculation window.
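Claim 6's two estimators are both simple to sketch in NumPy: a Laplacian magnitude mapped through an exponential decay for the edge weight, and Var = E[Y²] − E[Y]² computed with two box filters for the local variance. The decay constant `beta` and window size are illustrative assumptions.

```python
import numpy as np

def box_filter(img, k=3):
    """Mean over a k x k window (box filtering) via edge padding and summation."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def edge_weight(y, beta=0.05):
    """Laplacian magnitude mapped through an exponential decay to [0, 1).

    beta is an assumed decay constant; larger |Laplacian| -> weight near 1.
    """
    p = np.pad(y.astype(np.float64), 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1]
    return 1.0 - np.exp(-beta * np.abs(lap))

def local_variance(y, k=3):
    """Var = E[Y^2] - E[Y]^2, both expectations computed by box filtering."""
    y = y.astype(np.float64)
    return np.maximum(box_filter(y * y, k) - box_filter(y, k) ** 2, 0.0)

y = np.full((6, 6), 50.0); y[:, 3:] = 200.0   # vertical step edge at column 3
w = edge_weight(y)
v = local_variance(y)
```

On the synthetic step image, the edge columns get near-unit weight and nonzero variance while flat regions stay at zero, matching the roles the two maps play in the Q_map construction.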
- 7. A liquid rocket engine simulated test-run video noise reduction method based on the liquid rocket engine simulated test-run video noise reduction system as claimed in any one of claims 1-6, characterized by comprising the following steps: step 1, the input module reads the original frame sequence of the liquid rocket engine simulated test-run video, outputs image frames in the original RGB format frame by frame, and transmits them to the color space conversion module; step 2, the color space conversion module converts each input RGB-format image frame to the YUV format and outputs a luminance component image, a blue chrominance component image and a red chrominance component image through the Y, U and V channels respectively; step 3, the UV-channel processing module applies a bilateral filtering algorithm to perform lightweight denoising on the blue chrominance component image output by the U channel and the red chrominance component image output by the V channel, obtaining bilaterally filtered U-channel and V-channel images; meanwhile, the Y-channel processing unit performs adaptive Kalman filtering on the luminance component image output by the Y channel, specifically as follows: first, the pre-denoising module pre-denoises the luminance component output by the Y channel to obtain the pre-denoised Y-channel image; the edge weight calculation module detects the edge regions of the pre-denoised Y-channel image and calculates the edge weight, while the local variance calculation module estimates the local noise level of each pixel in the pre-denoised Y-channel image and calculates the local variance σ²(x); the spatial-domain adaptive Q_map construction module then combines the edge weight and the local variance σ²(x) to construct the pixel-level spatial-domain adaptive process noise covariance Q_map; second, the multi-frame weighted memory prediction model adds the current-frame luminance component image output by the Y channel to the frame buffer and predicts, on the basis of a weighted average of the multi-frame images in the buffer, a pixel-level predicted value free of noise residue; third, the improved Kalman filter computes the Kalman gain pixel by pixel from the spatial-domain adaptive process noise covariance Q_map and the predicted value, fusing the predicted value with the pre-denoised Y-channel image to obtain the Kalman-filtered Y-channel image; fourth, the high-frequency compensation module receives the Kalman-filtered Y-channel image and the local variance σ²(x), repairs the high-frequency details smoothed away during Kalman filtering while avoiding high-frequency enhancement in edge regions and noise-dense regions, and outputs the high-frequency enhanced Y-channel image; step 4, the channel fusion module recombines the high-frequency enhanced Y-channel image with the bilaterally filtered U-channel and V-channel images into a complete image and converts it back to the RGB format, obtaining denoised RGB-format image frames; step 5, the output module splices the denoised RGB-format image frames into a complete denoised video at the original video frame rate.
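The temporal core of the method above — the multi-frame weighted prediction and the pixel-wise Kalman fusion — can be sketched as follows. This is a minimal illustration: the decay constant, the measurement noise R, and the covariance bookkeeping are assumed, not taken from the patent.

```python
import numpy as np

def predict_from_buffer(buffer, decay=0.8):
    """Weighted-average prediction over buffered frames.

    Exponentially decaying weights: the most recent frame is weighted
    highest. `decay` is an illustrative assumption.
    """
    m = len(buffer)
    weights = np.array([decay ** (m - 1 - i) for i in range(m)])  # w_i
    stack = np.stack(buffer).astype(np.float64)
    return (weights[:, None, None] * stack).sum(axis=0) / weights.sum()

def kalman_update(pred, obs, q_map, r=25.0, p_prev=None):
    """Pixel-wise Kalman fusion: K = P/(P+R), Y = pred + K*(obs - pred)."""
    p = q_map if p_prev is None else p_prev + q_map    # predicted covariance
    k = p / (p + r)                                    # Kalman gain in [0, 1]
    fused = pred + k * (obs - pred)
    return fused, (1.0 - k) * p                        # updated covariance

buf = [np.full((4, 4), v) for v in (100.0, 102.0, 104.0)]  # brightening scene
pred = predict_from_buffer(buf)
obs = np.full((4, 4), 110.0)                               # current pre-denoised frame
fused, p_new = kalman_update(pred, obs, q_map=np.full((4, 4), 4.0))
```

Because Q_map is per-pixel, the gain K is per-pixel too: noisy flat regions (large Q) lean toward the current observation, while stable regions lean toward the temporal prediction, which is what suppresses trailing without ghosting.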
Description
Liquid rocket engine simulated test video noise reduction system and noise reduction method
Technical Field
The invention relates to a video noise reduction system and noise reduction method, and in particular to a liquid rocket engine simulated test-run video noise reduction system and noise reduction method.
Background
The carrier rocket is the cornerstone of space economic activity, and the liquid rocket engine, as its core subsystem, directly determines the performance and technical level of the carrier rocket. During a liquid rocket engine test run, faults often manifest as abnormal flames (including extreme forms such as explosion), liquid leakage, sensor detachment and the like, which can be observed with the naked eye, making machine-vision-based deep learning feasible. Meanwhile, to ensure reliability and stability, video monitoring of the liquid rocket engine requires high-speed cameras shooting at thousands of frames per second, producing extremely redundant video data. Existing image noise reduction methods generally fall into traditional filtering methods, classical Kalman filtering methods and deep-learning-based methods, each of which has shortcomings in the liquid rocket engine simulated test-run video denoising scenario. Traditional filtering can suppress noise to a certain extent, but non-local means denoising has high computational complexity, temporal mean denoising easily causes motion blur and trailing, and neither adapts to the complex degradation models found in test-run video: local overexposure, high-energy particle tracks, rolling-shutter/shock-wave coupling stripes and transient thermal-wave distortion.
Classical Kalman filtering methods rely on single-frame prediction, are inaccurate under complex video motion, readily produce ghosting, do not consider spatial-domain adaptive adjustment, and protect edges and details insufficiently. Deep-learning-based image denoising, such as convolutional-neural-network models, can recover some detail but suffers from complex models, many parameters, high computational cost, limited generalization, and poor suitability for the real-time processing requirements of test-run video. Other denoising methods also fall short: wavelet-transform methods are sensitive to threshold selection, and median filtering struggles to determine pixel values under high noise density, so none simultaneously meets the real-time, detail-preservation and noise-suppression requirements of liquid rocket engine simulated test-run video denoising. In summary, existing image denoising methods generally suffer from three problems: insufficient real-time performance (high computational complexity), limited adaptability (inability to handle complex degradation models), and an imbalance between denoising and detail retention (prone to blurring and trailing). They therefore struggle to meet the practical engineering requirements of liquid rocket engine simulated test-run video denoising, and an image denoising method combining efficient denoising, detail retention and real-time performance is needed.
Disclosure of Invention
The invention aims to solve the technical problem that existing image noise reduction methods struggle to meet the practical engineering requirements of liquid rocket engine simulated test-run video denoising, and provides a liquid rocket engine simulated test-run video noise reduction system and noise reduction method.
To achieve the above purpose, the technical solution provided by the present invention is as follows: the liquid rocket engine simulated test-run video noise reduction system is characterized by comprising an input module, a color space conversion module, a Y-channel processing unit, a UV-channel processing module, a channel fusion module and an output module; the input end of the input module receives the original frame sequence of the liquid rocket engine simulated test-run video and outputs image frames in the original RGB format frame by frame; the input end of the color space conversion module is connected with the output end of the input module and converts each image frame from the original RGB format to the YUV format, splitting it into three channel outputs, wherein the Y channel outputs a luminance component image, the U channel outputs a blue chrominance component image, and the V channel outputs a red chrominance component image; the Y-channel processing unit comprises a pre-denoising module, an edge weight calculation module, a local variance calculation module, a spatial-domain adaptive Q_map construction module, a multi-frame weighted memor