
CN-122004745-A - Multi-mode fundus imaging system based on FPGA and real-time eye movement compensation method

CN 122004745 A

Abstract

The invention discloses an FPGA-based multi-mode fundus imaging system and a real-time eye movement compensation method, belonging to the field of medical imaging equipment. The system comprises an optical imaging and tracking module, an FPGA processing control module, a storage module, and a system control and display unit. The FPGA module contains a unified frame-level clock management unit that synchronously schedules the scanning of the tracking beam and the imaging beams; a hardware feature matching unit resolves eyeball displacement parameters in real time, and, before each imaging frame or scanning block starts, these parameters are fed forward to correct the initial coordinates of the cSLO and SD-OCT scanning galvanometers, realizing frame-level eye movement compensation. In parallel, a hardware pipeline performs preprocessing, displacement-based multi-frame alignment and accumulation, and image registration and fusion on the cSLO and SD-OCT data streams. Through full-hardware processing, the invention realizes real-time synchronization, motion artifact suppression and high-precision fusion of multi-mode data, remarkably improving imaging quality, system response speed, and the repeatability of follow-up scans.

Inventors

  • BAI YANGYANG
  • LIU LEI

Assignees

  • 高视创新科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-01-19

Claims (9)

  1. A multi-modality FPGA-based fundus imaging system, comprising:
     an optical imaging and tracking module, comprising a confocal laser scanning fundus imaging unit, a spectral-domain optical coherence tomography unit and a scanning module, wherein the confocal laser scanning fundus imaging unit is used for generating, and scanning with a first group of scanning galvanometers, a co-propagating imaging beam and an eyeball motion tracking beam;
     an FPGA processing control module, internally provided with a unified frame-level clock management unit and integrating a parallel processing pipeline built from hardware logic resources, the pipeline comprising an eye movement signal real-time resolving unit, a real-time compensation control unit, a parallel logic unit group, and a real-time image registration and automatic re-scanning unit; and
     a storage module connected with the FPGA processing control module;
     wherein the frame-level clock management unit is used for generating a unified timing reference for the system, assigning frame identifiers tied to that unified timing reference to the reference image frames acquired by the eyeball motion tracking beam and to the data streams acquired by the imaging beam and the tomographic scanning beam, so that acquired data of different modalities are related in time through the frame identifiers;
     the eye movement signal real-time resolving unit is used for receiving the continuous reference image frames acquired by the eye movement tracking beam, resolving displacement parameters of the eyeball relative to a reference position in real time through an image feature matching algorithm implemented in hardware logic, and storing the displacement parameters and their corresponding frame identifiers in the storage module;
     the real-time compensation control unit is used for reading the displacement parameters from the storage module before the scanning period of each imaging frame or scanning block starts, and converting them into initial coordinate correction instructions for the first and second groups of scanning galvanometers, so as to realize feedforward eye movement compensation;
     the parallel logic unit group is used for receiving the acquisition data streams of the imaging beam and the tomographic scanning beam in parallel, retrieving the corresponding displacement parameters from the storage module based on the frame identifiers associated with the data streams, performing displacement-based multi-frame alignment and accumulation on the tomographic scanning data in a hardware pipeline, and simultaneously performing parallel preprocessing on the imaging beam data;
     the real-time image registration and automatic re-scanning unit is used for registering and fusing the data processed by the parallel logic unit group, outputting a fused image, and realizing an automatic re-scanning function in a follow-up scanning mode; and
     data transmission and state switching among the eye movement signal real-time resolving unit, the real-time compensation control unit, the parallel logic unit group and the real-time image registration and automatic re-scanning unit are driven by hardware timing signals generated by the frame-level clock management unit, so as to form a full-hardware closed-loop processing and control path.
  2. The FPGA-based multi-modal fundus imaging system of claim 1, wherein the processing of the tomographic scanning data by the parallel logic unit group includes performing a fast Fourier transform in hardware logic to reconstruct depth information, and performing, in a hardware accumulator using the displacement parameters, spatial position correction and signal accumulation of multiple scan reconstruction results from the same anatomical location.
  3. The FPGA-based multi-modal fundus imaging system according to claim 1, wherein, in the follow-up scanning mode, the real-time image registration and automatic re-scanning unit performs feature matching between a currently acquired image and a baseline image pre-stored in the storage module by a block mean absolute difference algorithm, and dynamically adjusts the coordinate system of subsequent scans according to the matching result to realize automatic re-scanning.
  4. The FPGA-based multi-modal fundus imaging system of claim 1, wherein said real-time compensation control unit calculates a scanning galvanometer drive voltage correction based on said displacement parameters, said correction being injected through a digital-to-analog converter into the scan control signal before the start of the next imaging scan frame.
  5. The FPGA-based multi-modality fundus imaging system of claim 1, further comprising a system control and display unit in communication with said FPGA processing control module, for issuing scan protocols and for receiving and displaying said fused image.
  6. A method for multi-modality fundus imaging real-time image fusion and eye movement compensation, characterized in that it is applied to the FPGA-based multi-modality fundus imaging system according to any one of claims 1 to 5, said method comprising: based on the unified timing generated by the frame-level clock management unit of the FPGA processing control module, synchronously starting eye movement tracking scanning, confocal laser scanning fundus imaging and spectral-domain optical coherence tomography, and assigning frame identifiers to the acquired data of all channels; in the eye movement signal real-time resolving unit of the FPGA processing control module, performing hardware-level feature matching on the image sequence acquired in real time by the tracking scan, resolving the current eyeball displacement parameters, and storing the displacement parameters and their corresponding frame identifiers; before data acquisition of each imaging frame or scanning block starts, reading the stored displacement parameters through the real-time compensation control unit of the FPGA processing control module, converting them into scan coordinate correction amounts, and feeding these forward to the galvanometer control systems for confocal laser scanning fundus imaging and spectral-domain optical coherence tomography; during imaging data acquisition, executing in parallel, through the parallel logic unit group of the FPGA processing control module, real-time preprocessing of the confocal laser scanning fundus imaging data and displacement-based multi-frame spatial alignment and accumulation of the spectral-domain optical coherence tomography data; and, based on the association of the frame identifiers, registering and fusing the processed confocal laser scanning fundus imaging image and the spectral-domain optical coherence tomography image in the real-time image registration and automatic re-scanning unit of the FPGA processing control module.
  7. The method for multi-modal fundus imaging real-time image fusion and eye movement compensation according to claim 6, wherein the hardware-level feature matching uses a cross-correlation algorithm to compute the cross-correlation function between a reference image frame acquired in real time and a pre-stored reference image frame so as to calculate the displacement parameters of the eyeball, and wherein the multi-frame spatial alignment and accumulation comprises position-correcting multiple A-scan lines of the same anatomical position using the displacement parameters and then accumulating and averaging them in real time in a hardware accumulator.
  8. The method for multi-modal fundus imaging real-time image fusion and eye movement compensation according to claim 6, wherein, in the follow-up scanning mode, the method further comprises performing feature matching between a currently acquired image and a pre-stored baseline image through a block mean absolute difference algorithm, and adjusting the global coordinates of subsequent scans according to the matching result so as to realize automatic re-scanning.
  9. The method for multi-modal fundus imaging real-time image fusion and eye movement compensation according to claim 6, wherein the steps of real-time eye movement signal resolving, real-time compensation control, parallel logic processing, and image registration and re-scanning are driven by hardware timing signals inside the FPGA and execute in a closed loop within a unified hardware pipeline.
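The cross-correlation matching and displacement-based accumulation of claims 1, 2 and 7 can be modelled in software. The sketch below is a minimal NumPy illustration, not the patent's hardware implementation: it assumes an FFT-based cross-correlation (the claims specify only "a cross-correlation algorithm") and models the hardware accumulator as a simple aligned average; all function names are hypothetical.

```python
import numpy as np

def estimate_displacement(ref: np.ndarray, cur: np.ndarray) -> tuple:
    """Estimate the integer (dy, dx) shift of `cur` relative to `ref`
    via FFT-based cross-correlation (software model of the hardware
    feature-matching unit)."""
    # Cross-correlation theorem: corr = IFFT( FFT(cur) * conj(FFT(ref)) )
    corr = np.fft.ifft2(np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peak indices past N/2 correspond to negative (wrapped) shifts
    h, w = ref.shape
    dy = dy if dy <= h // 2 else dy - h
    dx = dx if dx <= w // 2 else dx - w
    return int(dy), int(dx)

def accumulate_aligned(frames, ref):
    """Model of displacement-based multi-frame alignment and accumulation:
    shift each frame back by its measured displacement, then average the
    stack (the hardware accumulator's accumulate-and-average step)."""
    acc = np.zeros_like(ref, dtype=np.float64)
    for frame in frames:
        dy, dx = estimate_displacement(ref, frame)
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))  # undo measured shift
    return acc / len(frames)
```

For example, with `cur = np.roll(ref, (5, -3), axis=(0, 1))`, `estimate_displacement(ref, cur)` returns `(5, -3)`. The FPGA would realize the same mathematics with pipelined FFT cores and a dual-port accumulator rather than floating-point arrays.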
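The block mean absolute difference matching of claims 3 and 8 (the AutoRescan path) can likewise be sketched in software. This is an illustrative exhaustive search with hypothetical names and a wrap-around simplification; the hardware version would evaluate candidate offsets in parallel with subtract/absolute/accumulate logic over valid overlapping blocks only.

```python
import numpy as np

def block_mad_match(baseline: np.ndarray, current: np.ndarray,
                    search: int = 8) -> tuple:
    """Return the (dy, dx) offset of `current` relative to `baseline`
    that minimises the mean absolute difference (MAD) over a
    +/-`search` pixel window."""
    best_mad, best_off = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Candidate: baseline shifted by (dy, dx); np.roll wraps
            # around for brevity, which real image matching would not
            cand = np.roll(baseline, (dy, dx), axis=(0, 1))
            mad = float(np.mean(np.abs(current - cand)))
            if mad < best_mad:
                best_mad, best_off = mad, (dy, dx)
    return best_off
```

The returned offset is what claim 3 uses to "dynamically adjust the coordinate system of subsequent scans": the follow-up scan origin is translated by (dy, dx) before scanning resumes.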

Description

Multi-mode fundus imaging system based on FPGA and real-time eye movement compensation method

Technical Field

The invention belongs to the field of medical imaging equipment, and particularly relates to a multi-mode fundus imaging system based on an FPGA and a real-time eye movement compensation method.

Background

Multi-mode fundus imaging is an important tool in modern ophthalmic diagnosis: by integrating the strengths of different imaging modalities, it provides more comprehensive information for the screening, diagnosis and follow-up of retinal and choroidal diseases. Confocal laser scanning fundus imaging (cSLO) provides high-contrast two-dimensional images of the retinal surface, while spectral-domain optical coherence tomography (SD-OCT) acquires three-dimensional, high-resolution tomographic structure of the retinal layers. Combining the wide-field, high-speed imaging capability of cSLO with the depth resolution of SD-OCT allows comprehensive evaluation of the morphological and functional changes of glaucoma, age-related macular degeneration, diabetic retinopathy and other diseases. High-quality multi-modality fusion imaging, however, faces a number of technical challenges, the core difficulty of which stems from the physiological movement of the eyeball: even in well-cooperating patients there are unavoidable microtremors, drift and microsaccades.
These movements occur on a frame-level or scan-block-level time scale and cause the following problems. (1) Scan offset and motion artifacts: within a single scan (especially a time-consuming three-dimensional OCT volume scan), eye movement causes scan-line misalignment or distortion, creating artifacts such as smearing, breakage or repetition and significantly reducing image quality. (2) Multi-modality image registration errors: even when cSLO and OCT images are acquired synchronously, the lack of a uniform motion-compensation reference leaves spatial misalignment for subsequent software registration to resolve, affecting the accuracy of image fusion and of diagnosis. (3) Insufficient follow-up scanning consistency: in long-term disease monitoring, the alignment accuracy of scans taken at different time points directly affects the quantitative evaluation of lesion change, and traditional manual or software post-processing registration is inefficient and of limited repeatability. To cope with eye movement interference, the prior art mainly adopts two schemes: the first performs post-processing registration and artifact correction in software, but suffers from large processing delay, inability to guide imaging in real time, and limited capability to correct large-amplitude movement; the second introduces active eye movement tracking at the hardware level.
However, the existing hardware schemes still leave room for optimization. (1) Real-time performance and determinism are insufficient: tracking, compensation and multi-modality image processing generally depend on a general-purpose processor or on scattered hardware modules, which become performance bottlenecks at high frame rates and large data throughput. (2) Real-time fusion display and frame-level compensation capability are limited: existing systems cannot achieve frame-level feedforward compensation and real-time image fusion during scanning. (3) System integration is low: functions such as motion tracking, frame-level image denoising, multi-frame accumulation averaging and automatic follow-up alignment (AutoRescan) are not deeply integrated and accelerated in parallel at the hardware level, increasing power consumption, complexity and cost. These drawbacks stem from the fact that the processing architecture of existing systems is serial in nature, so that unavoidable delays accumulate across the acquisition, processing, compensation-command generation and feedback links of the eye movement signal, creating a serious "feedback hysteresis effect". A novel technical scheme is therefore urgently needed that can realize frame-level multi-modality data synchronization, eye movement compensation and real-time image processing on low-level hardware such as an FPGA (field-programmable gate array), so as to comprehensively improve the real-time performance, stability, fusion precision and clinical efficiency of multi-mode fundus imaging.
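The feedback hysteresis effect can be illustrated with a deliberately simplified one-dimensional timing model. This is a hypothetical toy (invented names and numbers, one scalar eye position per frame); the real system corrects two-axis galvanometer drive voltages as in claim 4.

```python
def scan_frame(galvo_origin: float, eye_pos: float, n_lines: int = 4):
    """Per-line positioning error for one frame (1-D toy model)."""
    return [abs(eye_pos - galvo_origin)] * n_lines

def serial_session(eye_positions):
    """Serial architecture: the correction computed from a frame becomes
    available only after that frame, so the galvo origin always lags the
    eye by one frame (the 'feedback hysteresis effect')."""
    errors, galvo_origin = [], 0.0
    for eye_pos in eye_positions:
        errors += scan_frame(galvo_origin, eye_pos)
        galvo_origin = eye_pos  # correction arrives after the frame
    return errors

def feedforward_session(eye_positions):
    """Frame-level feedforward: the tracker's displacement is read and
    applied to the galvo start coordinate *before* each frame begins."""
    errors = []
    for eye_pos in eye_positions:
        galvo_origin = eye_pos  # correction injected before the frame
        errors += scan_frame(galvo_origin, eye_pos)
    return errors
```

For eye positions `[0.0, 1.0, 1.5]`, the serial model accumulates 6.0 units of line error while the feedforward model accumulates none, which is the motivation for correcting the galvanometer start coordinates before each frame or scanning block.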
Disclosure of Invention

[ Problem ] The technical problem to be solved by the invention is to overcome the feedback hysteresis effect caused by the serial processing architecture of existing multi-mode fundus imaging systems, which manifests in that 1) eye movement compensation cannot be completed before the next imaging scan frame or scanning block starts, 2) deter