
CN-121981906-A - Hardware implementation method of time-space domain filtering system combining image information

CN121981906A

Abstract

A hardware implementation method of a time-space domain filtering system combining image information comprises: preprocessing each image by downsampling layering and upsampling phase difference; processing the data stream of each layer of the original-image preprocessing map in pipeline form to obtain four-direction edge-filtered image video stream data; obtaining multi-frame image cache data through read-write processing of each layer's filtered-image video stream data; performing motion-compensation denoising on each layer of filtered image in the time domain; convolving the motion-compensation-denoised video stream with a bilinear interpolation kernel in pipeline form to obtain image data restored to the original resolution and fusing the video stream data of all layers; and processing the fused image data in pipeline form to obtain spatially processed image video stream data. The invention preserves detail information in the image as much as possible while suppressing image noise, and effectively suppresses the trailing phenomenon of moving objects in the image scene under motion conditions.

Inventors

  • ZHANG TAO
  • HU QIYUAN
  • LIU TINGHAO
  • XU XINRUI
  • KONG XIANGHAO
  • ZHANG JUN
  • TANG HAITAO
  • ZHAO WEN
  • CAO RUI
  • JIANG YU
  • LI XIAOJUAN
  • LIU BIN

Assignees

  • China Academy of Space Technology (中国空间技术研究院)

Dates

Publication Date
2026-05-05
Application Date
2025-12-12

Claims (7)

  1. A hardware implementation method of a time-space domain filtering system combining image information, characterized by comprising the following steps: Step 1, applying image downsampling layering and upsampling phase difference to each frame of image to obtain multi-layer images at different resolutions and an original-image preprocessing map; Step 2, processing the data stream of each layer of the original-image preprocessing map in pipeline form with an edge-filtering convolution kernel to obtain four-direction edge-filtered image video stream data; Step 3, obtaining multi-frame image cache data through read-write processing of each layer of filtered-image video stream data over a high-speed interaction channel between the FPGA and DDR3; Step 4, using the high-speed interaction channel between the FPGA and DDR3 and an inter-frame denoising formula, combining the differential filtered image of the current frame with the motion-compensated result of the previous frame, and performing motion-compensation denoising on each layer of filtered image in the time domain; Step 5, convolving the motion-compensation-denoised data video stream with a bilinear interpolation kernel in pipeline form to obtain image data restored to the original resolution, and fusing the video stream data of all layers; Step 6, processing the fused image data in pipeline form with the edge-filtering convolution kernel of Step 2 to obtain spatially processed image video stream data; Step 7, outputting the spatially processed image video stream data, with its clock parameters and image resolution, to a screen through an HDMI interface.
  2. The hardware implementation method of a time-space domain filtering system combining image information according to claim 1, wherein Step 1, applying image downsampling layering and upsampling phase difference to each frame of image to obtain multi-layer images at different resolutions and an original-image preprocessing map, specifically comprises: Step 1-1, performing image downsampling on each frame simultaneously at 6 different factors, namely 2, 4, 8, 16, 32 and 64 times; for each downsampling layer the original image input clock is frequency-divided, the 6 divided clocks serve as downsampling control clocks, sampling of the video stream is triggered on the rising edge of each control clock, and the whole data downsampling flow is completed; Step 1-2, the downsampled image data enter a 2×2 upsampling window built with the Shift RAM Generator IP core in Vivado, the upsampling computation is triggered on the rising clock edge, each downsampled data stream undergoes 2× bilinear interpolation upsampling in the 2×2 window, and the outputs are buffered in a FIFO Generator IP core; the 6 layers of buffered data are output simultaneously, aligned to the 64× downsampled data, for differencing; subtracting two images of different blurring degrees in the same scale space yields differential maps D1, D2, D3, D4, D5 and D6 at different resolutions, specifically expressed as D_i = G_i − U_i, where D_i denotes the i-th layer differential-pyramid downsampled image, G_i denotes the i-th layer pyramid downsampled image, and U_i denotes the i-th layer scale-restored image.
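As an illustrative software sketch of Steps 1-1 and 1-2 (not part of the claims), the Python below models the pyramid construction, assuming the differential image is the difference D_i = G_i − U_i between a layer and its scale-restored version. The function names, the array-based processing and the simple 2× decimation are stand-ins for the clock-divided Shift RAM hardware; the patent's formulas are not reproduced in the source.

```python
import numpy as np

def downsample2x(img):
    """2x decimation: keep every other row/column, mimicking sampling
    on a divided clock (claim 1, step 1-1)."""
    return img[::2, ::2]

def upsample2x_bilinear(img):
    """2x bilinear interpolation upsampling over a 2x2 neighbourhood."""
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w), dtype=np.float64)
    # pad right/bottom edges so every 2x2 window is defined
    p = np.pad(img.astype(np.float64), ((0, 1), (0, 1)), mode="edge")
    for y in range(2 * h):
        for x in range(2 * w):
            fy, fx = y / 2.0, x / 2.0
            y0, x0 = int(fy), int(fx)
            dy, dx = fy - y0, fx - x0
            out[y, x] = ((1 - dy) * (1 - dx) * p[y0, x0]
                         + (1 - dy) * dx * p[y0, x0 + 1]
                         + dy * (1 - dx) * p[y0 + 1, x0]
                         + dy * dx * p[y0 + 1, x0 + 1])
    return out

def differential_pyramid(img, layers=6):
    """Build differential images D_i = G_i - U_i (assumed form): G_i is
    the i-th downsampled layer, U_i its 2x scale-restored version."""
    diffs = []
    g = img.astype(np.float64)
    for _ in range(layers):
        g_next = downsample2x(g)          # G_{i+1}
        u = upsample2x_bilinear(g_next)   # U_i, restored to G_i's scale
        diffs.append(g - u[: g.shape[0], : g.shape[1]])
        g = g_next
    return diffs
```

On a noise-free constant image every differential layer is zero, which matches the intent that the D_i maps carry detail and noise rather than the base signal.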
  3. The hardware implementation method of a time-space domain filtering system combining image information according to claim 1, wherein Step 2, processing the data stream of each layer of the original-image preprocessing map in pipeline form with an edge-filtering convolution kernel to obtain four-direction edge-filtered image video stream data, specifically comprises: Step 2-1, constructing a 16×16 filter window with Shift RAM Generator IP cores in Vivado, where the buffer depth of each IP core equals the number of pixels per line of the current layer image, with the original-image preprocessing map video stream data as input; Step 2-2, calculating the time required for the pipeline processing of the image video stream, constructing a buffer for the image video line and field signals with Shift RAM Generator IP cores in Vivado, and storing the line and field signals in the buffer; Step 2-3, feeding the video stream into the 16×16 filter window, computing within the convolution window according to the four-direction edge filter operators, executing the computation flow in the FPGA in pipeline form, and outputting the filtered video data stream when the computation completes.
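The four-direction operators themselves are not given in the source; the sketch below uses illustrative 3×3 directional operators (the claim specifies a 16×16 window) purely to show the sliding-window convolution the line-buffered pipeline performs. Kernel values and names are assumptions.

```python
import numpy as np

# Illustrative 3x3 directional edge operators; the patent's actual
# four-direction operators use a 16x16 window and are not reproduced here.
DIRECTIONAL_KERNELS = {
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "diag_45":    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    "diag_135":   np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
}

def convolve2d_valid(img, kernel):
    """Direct 2-D correlation over every fully covered window, mimicking
    the sliding window built from line-buffering Shift RAM cores."""
    ih, iw = img.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def four_direction_edges(img):
    """Apply all four directional operators to one pyramid layer."""
    return {name: convolve2d_valid(img.astype(float), k)
            for name, k in DIRECTIONAL_KERNELS.items()}
```

Each operator sums to zero, so flat regions produce no response and only edges in the matching direction survive, which is the property the spatial filtering stage relies on.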
  4. The hardware implementation method of a time-space domain filtering system combining image information according to claim 1, wherein Step 3, obtaining multi-frame image cache data through read-write processing of each layer of filtered-image video stream data over a high-speed interaction channel between the FPGA and DDR3, specifically comprises: Step 3-1, instantiating an FDMA_control module and an FDMA in the FPGA, the FDMA_control module containing a FIFO; the FDMA_control module controls the DDR3 interface timing, writing FIFO data from the FPGA into DDR3 or reading DDR3 data back into the FPGA; the FDMA manages the FIFO and monitors its fill state, and actively initiates a write request once the FIFO meets the DDR3 write condition; Step 3-2, in the initial state, the video stream data, line signal and field signal first enter the FIFO, and once the FIFO holds a full burst a write-request state is triggered; the FDMA_control module sends a DDR write request signal areq to the FDMA and waits for the FDMA write enable signal wr_en to complete the handshake; in the write state, with areq and wr_en valid simultaneously, image data is read from the FIFO in the FDMA_control module and written into the DDR3 memory over the AXI4 bus; Step 3-3, when the initial state is idle, no image data data_img has been written into the FIFO; a rising valid signal indicates that image data is cached in the FIFO and the read-request state is entered; in the read-request state the write enable is set as wr_en = valid && ready, meaning the FIFO-buffered image data is written into the DDR3 memory over the AXI bus; when the last image data of a group of FIFO buffers has been transferred, the state returns to the read initial state and the data stream transfer stops.
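The burst-triggered FIFO-to-DDR3 write handshake of Step 3-2 can be modeled in software. In this sketch BURST_LEN, the class name, and the list standing in for the DDR3 memory are all illustrative assumptions; the real design moves data over an AXI4 bus under the areq/wr_en handshake.

```python
from collections import deque

BURST_LEN = 4  # assumed burst length; the patent does not specify one

class FdmaWriteChannel:
    """Toy model of the FDMA_control write path: pixels enter a FIFO,
    and once a full burst is buffered the write request (areq) is
    raised; the handshake (wr_en) drains one burst to 'DDR3'."""

    def __init__(self):
        self.fifo = deque()
        self.ddr3 = []          # stands in for the DDR3 memory
        self.areq = False

    def push_pixel(self, pixel):
        self.fifo.append(pixel)
        if len(self.fifo) >= BURST_LEN:   # FIFO holds a full burst
            self.areq = True              # actively raise the write request

    def grant(self):
        """FDMA answers areq with wr_en: move one burst to 'DDR3'."""
        if not self.areq:
            return False
        for _ in range(BURST_LEN):
            self.ddr3.append(self.fifo.popleft())
        self.areq = len(self.fifo) >= BURST_LEN
        return True
```

The key design point the model captures is that the FIFO side, not the memory controller, initiates transfers: areq is asserted only when a complete burst is available, so DDR3 accesses are always full-burst and never stall mid-transfer.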
  5. The hardware implementation method of a time-space domain filtering system combining image information according to claim 4, wherein Step 4, using the high-speed interaction channel between the FPGA and DDR3 and an inter-frame denoising formula, combining the differential filtered image of the current frame with the motion-compensated result of the previous frame, and performing motion-compensation denoising on each layer of filtered image in the time domain, specifically comprises: Step 4-1, the direction parameters of the current frame's differential filtered image processed in Step 2 enter a FIFO for buffering; the FIFO read enable is driven by the line signal of the previous frame's motion-compensated video stream data, obtained after the FPGA reads DDR3 in Step 3; once the read enable goes high, the differential filtered image is read out of the FIFO and processed together with the current frame parameters and the previous frame's motion-compensated video stream data; Step 4-2, describing the denoising formula in Verilog and performing time-domain motion-compensation denoising on the two video streams of Step 4-1 according to the denoising formula R_n^i = α·R_{n−1}^i + β·D_n^i, where R_n^i denotes the result of the i-th layer DoG motion-compensation denoising of the n-th frame image, D_n^i denotes the i-th layer differential-pyramid downsampled filtered image data, α denotes the motion-signal normalization parameter, and β denotes the differential-signal normalization parameter.
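The exact denoising formula is not reproduced in the source; what the claim describes is a recursion blending the previous frame's compensated result with the current frame's differential image. The sketch below assumes a first-order recursive form with a single blend weight alpha standing in for the patent's motion/differential normalization parameters.

```python
def temporal_denoise(frames, alpha=0.8):
    """Recursive temporal filter sketched from claim 5 (assumed form):
    R_n = alpha * R_{n-1} + (1 - alpha) * D_n, blending the previous
    compensated result R_{n-1} with the current differential image D_n.
    Each frame is a flat list of pixel values for one pyramid layer;
    alpha stands in for the normalization parameters, which are not
    given in the source."""
    result = None
    history = []
    for d in frames:
        if result is None:
            result = d                      # first frame passes through
        else:
            result = [alpha * r + (1 - alpha) * x
                      for r, x in zip(result, d)]
        history.append(list(result))
    return history
```

With a large alpha the output leans on the accumulated history, which suppresses frame-to-frame noise; motion compensation of R_{n-1} (handled by the DDR3 read path in hardware) is what keeps this averaging from smearing moving objects.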
  6. The hardware implementation method of a time-space domain filtering system combining image information according to claim 5, wherein Step 5, convolving the motion-compensation-denoised data video stream with a bilinear interpolation kernel in pipeline form to obtain image data restored to the original resolution and fusing the video stream data of all layers, specifically comprises: Step 5-1, constructing 5 upsampling windows of 2×2 with Shift RAM Generator IP cores, the depth of each window being the number of valid pixels per line at the current layer's resolution, the windows corresponding to the 6 layers of video stream image data for the subsequent upsampling; Step 5-2, calculating the time required to process each layer's image video stream, constructing a buffer for the image video line and field signals with Shift RAM Generator IP cores in Vivado, and storing the line and field signals in the buffer; Step 5-3, constructing a 2×2 upsampling window before each layer's video stream enters the buffer and completing the upsampling computation in the window with the bilinear interpolation algorithm; Step 5-4, restoring each layer's filtered video data stream to the original resolution and buffering it in a FIFO; when the line valid signal that enables reading of the current frame's lowest layer, restored to the original video resolution, is pulled high, the other 5 layers of data are output from their FIFOs together and all 6 layers are summed simultaneously, yielding the fused video data stream.
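The fusion of Step 5-4 can be sketched as: restore every layer to the top resolution, then sum them once the slowest (lowest) layer is ready. For brevity this sketch uses nearest-neighbour replication (np.kron) where the patent uses bilinear interpolation, and assumes power-of-two scale gaps; the function name is illustrative.

```python
import numpy as np

def restore_and_fuse(layers):
    """Claim 6 fusion sketch: each denoised layer is upsampled back to
    the top resolution and all layers are summed. Nearest-neighbour
    replication stands in for the patent's bilinear interpolation."""
    top_shape = layers[0].shape
    fused = np.zeros(top_shape, dtype=np.float64)
    for layer in layers:
        factor = top_shape[0] // layer.shape[0]   # power-of-two scale gap
        restored = np.kron(layer, np.ones((factor, factor)))
        fused += restored[: top_shape[0], : top_shape[1]]
    return fused
```

In hardware the per-layer FIFOs perform the role of the `layers` list here: they hold the faster, higher-resolution layers until the lowest layer's line valid signal releases all six streams in lockstep for the addition.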
  7. The hardware implementation method of a time-space domain filtering system combining image information according to claim 6, wherein Step 7, outputting the spatially processed image video stream data, with its clock parameters and image resolution, to a screen through an HDMI interface, specifically comprises: Step 7-1, constructing the output interface line and field signals according to the resolution of the spatially processed image video stream data, using a state machine with six states: S0, an idle state waiting for a new output task; S1, the FIFO data output state; S2, a count-check state, judging whether a counter has reached the line count, switching directly to S5 if so and to S3 otherwise, the counter being a logic counting module designed in the FPGA to control the HDMI line output timing and judge whether one line of image data has been fully output; S3, a line blanking state, timing the line blanking period within one frame by a timer, checking after the blanking period whether more than one line of data has been read into the FIFO and switching to S4 if not; S4, a wait state, switching to S1 and preparing to start output once the FIFO holds more than one line of data; S5, an end state, where the next rising clock edge returns the data output to S0, ready for the next output task; in S0 the data stream buffering function is realized, and S1 is entered when the buffered data count in the FIFO is greater than or equal to COL, where COL is the number of pixels per line; in S1 and S2 new-format data is output, and after each line of data is output, S3 is entered; when the blanking period ends and more than one line of data is not cached in the FIFO, the system enters S4 and waits for the FIFO to cache image data; after ROW lines of data have been read out, the format conversion of one frame of image data with resolution COL×ROW is complete and S5 is entered; Step 7-2, setting the relevant clock parameters according to the HDMI interface protocol and connecting the output interface line and field signals according to the video stream output line and field signals of Step 7-1, completing output of the video from inside the FPGA to the display screen.
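The six-state output controller of Step 7-1 can be modeled as a software state machine. The tiny COL×ROW frame, the class name, and the one-pixel-per-step granularity are assumptions made to keep the sketch testable; timing (the S3 blanking timer) is elided.

```python
COL, ROW = 4, 2   # toy frame size; the real values come from the video stream

class HdmiOutputFsm:
    """Six-state output controller of claim 7: S0 idle, S1 FIFO output,
    S2 line-count check, S3 line blanking, S4 wait for data, S5 end."""

    def __init__(self):
        self.state = "S0"
        self.fifo = []
        self.out = []
        self.col_count = 0
        self.rows_done = 0

    def step(self, pixel=None):
        if pixel is not None:
            self.fifo.append(pixel)       # upstream keeps filling the FIFO
        if self.state == "S0":            # idle: wait until one line is cached
            if len(self.fifo) >= COL:
                self.state = "S1"
        elif self.state == "S1":          # output one pixel from the FIFO
            self.out.append(self.fifo.pop(0))
            self.col_count += 1
            self.state = "S2"
        elif self.state == "S2":          # line-counter check
            if self.col_count == COL:
                self.col_count = 0
                self.rows_done += 1
                self.state = "S5" if self.rows_done == ROW else "S3"
            else:
                self.state = "S1"
        elif self.state == "S3":          # line blanking (timer elided)
            self.state = "S1" if len(self.fifo) >= COL else "S4"
        elif self.state == "S4":          # wait for the FIFO to refill
            if len(self.fifo) >= COL:
                self.state = "S1"
        elif self.state == "S5":          # frame done: back to idle
            self.rows_done = 0
            self.state = "S0"
```

The S3/S4 pair is the mechanism that decouples the processing clock domain from the HDMI pixel clock: a line is only started when a full line is already buffered, so output never underruns mid-line.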

Description

Hardware implementation method of a time-space domain filtering system combining image information

Technical Field

The invention relates to a hardware implementation method of a time-space domain filtering system combining image information, belonging to the technical field of image denoising and enhancement.

Background

Noise is a significant cause of image degradation. Both spatial-domain and time-domain denoising methods suppress image noise to some degree. However, current denoising methods are realized on different platforms and incur a certain delay, typically from several lines to several frames. Specifically: (1) Computer-platform denoising algorithms mainly run the denoising algorithm on a CPU; compared with other processing platforms the computer platform achieves an excellent denoising effect, but denoising time is measured in seconds. (2) Embedded-platform denoising, where denoising time is measured in frames. Mainstream platforms include ARM and DSP platforms: the DSP platform computes floating-point operations faster than other platforms, while the ARM platform is better suited to controlling the input and output of the algorithm data flow. (3) GPU-platform methods mainly perform image denoising with deep-learning networks and, compared with filtering-based, model-based and traditional learning methods, obtain more promising denoising results. GPU platforms such as the Nvidia A100 are currently among the popular platforms.
However, the problems of the prior-art platforms can be summarized as either excessive delay, or essentially real-time processing whose algorithmic effect falls short of the computer platform.

Disclosure of the Invention

The invention overcomes the defects of the prior art by providing a hardware implementation method of a time-space domain filtering system combining image information, which preserves detail information, target contours and the like in an image as much as possible while suppressing image noise, and effectively suppresses the trailing phenomenon caused by a moving target in an image scene under large-motion or scene-change conditions. The technical scheme of the invention is as follows. A hardware implementation method of a time-space domain filtering system combining image information comprises the following steps: Step 1, applying image downsampling layering and upsampling phase difference to each frame of image to obtain multi-layer images at different resolutions and an original-image preprocessing map; Step 2, processing the data stream of each layer of the original-image preprocessing map in pipeline form with an edge-filtering convolution kernel to obtain four-direction edge-filtered image video stream data; Step 3, obtaining multi-frame image cache data through read-write processing of each layer of filtered-image video stream data over a high-speed interaction channel between the FPGA and DDR3; Step 4, using the high-speed interaction channel between the FPGA and DDR3 and an inter-frame denoising formula, combining the differential filtered image of the current frame with the motion-compensated result of the previous frame, and performing motion-compensation denoising on each layer of filtered image in the time domain; Step 5, convolving the motion-compensation-denoised data video stream with a bilinear interpolation kernel in pipeline form to obtain image data restored to the original resolution, and fusing the video stream data of all layers; Step 6, processing the fused image data in pipeline form with the edge-filtering convolution kernel of Step 2 to obtain spatially processed image video stream data; Step 7, outputting the spatially processed image video stream data, with its clock parameters and image resolution, to a screen through an HDMI interface. Further, Step 1, applying image downsampling layering and upsampling phase difference to each frame of image to obtain multi-layer images at different resolutions and an original-image preprocessing map, specifically includes: Step 1-1, performing image downsampling on each frame simultaneously at 6 different factors, namely 2, 4, 8, 16, 32 and 64 times; each downsampling layer is processed with frequency div