CN-121971029-A - Retinopathy detection method and device based on mobile equipment
Abstract
The application provides a retinopathy detection method and device based on a mobile device. The method comprises: acquiring an initial retina image of a target user with a mobile device; preprocessing the initial retina image to obtain a target retina image; processing the target retina image with a pre-trained neural network model to obtain an initial detection result; invoking a historical detection result of the target user and generating diagnosis data based on the historical detection result and the initial detection result; and sending the diagnosis data to a database associated with the target user and displaying the diagnosis data on the mobile device. The application provides a robust, economical, and scalable retinopathy detection and monitoring solution that helps patients and clinicians achieve early diagnosis and continuous management of eye disease, enabling patients in resource-scarce areas to self-screen without specialized equipment.
Inventors
- Khan A. D.
- Lin Ding
- Peng Manqiang
Assignees
- Changsha Aier Eye Hospital of Aier Eye Hospital Group Co., Ltd. (爱尔眼科医院集团股份有限公司长沙爱尔眼科医院)
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-01-22
Claims (10)
- 1. A mobile device-based retinopathy detection method, comprising: acquiring an initial retina image of a target user based on a mobile device, and preprocessing the initial retina image to obtain a target retina image; processing the target retina image based on a pre-trained neural network model to obtain an initial detection result; invoking a historical detection result of the target user, and generating diagnosis data based on the historical detection result and the initial detection result; and sending the diagnosis data to a database associated with the target user, and displaying the diagnosis data on the mobile device.
- 2. The method of claim 1, wherein the acquiring an initial retina image of the target user based on the mobile device comprises: analyzing collected image data based on a guide program preset on the mobile device, and determining a relative positional relationship between the target user and the mobile device and current environmental data; determining guidance data based on the relative positional relationship and the environmental data; and guiding the target user to acquire an image through the mobile device based on the guidance data to obtain the initial retina image.
- 3. The method of claim 2, wherein the determining guidance data based on the relative positional relationship and the environmental data comprises: invoking a reference retinal image of the target user and/or a generic retinal image; generating a reference relative positional relationship and reference environmental data based on the reference retinal image and/or the generic retinal image; generating first guidance sub-data from the relative positional relationship and the reference relative positional relationship, and generating second guidance sub-data from the environmental data and the reference environmental data; and integrating the first guidance sub-data and the second guidance sub-data to obtain the guidance data.
- 4. The method of claim 1, wherein the preprocessing the initial retina image to obtain a target retina image comprises: adjusting the lighting, focus, and angle of the initial retina image based on a preset calibration program to obtain a retina adjustment image; and performing standardization processing on the retina adjustment image to obtain the target retina image, wherein the standardization processing comprises size adjustment, contrast adjustment, and background correction.
- 5. The method of claim 1, wherein the neural network model is initialized on an InceptionV architecture using EYENET weights, wherein the EYENET weights are pre-trained on a Di-EYENET dataset, and wherein the neural network model integrates chained-foraging and cyclone-foraging strategies through dolphinfish foraging optimization, wherein the chained foraging optimizes multiple candidate solutions in a coordinated manner, wherein the cyclone foraging enhances the global exploration ability of the neural network model by simulating a spiral search pattern, and wherein network hyperparameters of the neural network model, including learning rate, batch size, image rotation range, and number of training rounds, are dynamically adjusted.
- 6. The method of claim 1, wherein the neural network model is initialized on an InceptionV architecture using EYENET weights, wherein the EYENET weights are pre-trained on a Di-EYENET dataset, and wherein the neural network model adjusts network hyperparameters via a particle swarm optimization algorithm.
- 7. The method of claim 1, wherein the invoking a historical detection result of the target user and generating diagnosis data based on the historical detection result and the initial detection result comprises: invoking the historical detection result of the target user, and generating a reference image based on the historical detection result; determining retinopathy trend information from the reference image and the initial detection result, and constructing a pathological-change heat map; determining retinopathy progression information based on the detection time of the initial detection result; and integrating the retinopathy trend information, the pathological-change heat map, and the retinopathy progression information to obtain the diagnosis data.
- 8. A mobile device-based retinopathy detection apparatus, comprising: an acquisition module configured to acquire an initial retina image of a target user based on a mobile device, and preprocess the initial retina image to obtain a target retina image; a detection module configured to process the target retina image based on a pre-trained neural network model to obtain an initial detection result; a diagnosis module configured to invoke a historical detection result of the target user and generate diagnosis data based on the historical detection result and the initial detection result; and a feedback module configured to send the diagnosis data to a database associated with the target user and display the diagnosis data on the mobile device.
- 9. A computing device, comprising a memory and a processor, wherein the memory stores computer-executable instructions, and the processor is configured to execute the computer-executable instructions to implement the steps of the mobile device-based retinopathy detection method of any one of claims 1 to 7.
- 10. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the mobile device-based retinopathy detection method of any one of claims 1 to 7.
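The standardization processing named in claim 4 (size adjustment, contrast adjustment, and background correction) can be illustrated with a minimal NumPy sketch. This is an assumed implementation, not the patent's: the function name, the 299x299 target size, the linear contrast stretch, and the mean-subtraction background correction are all illustrative choices.

```python
import numpy as np

def standardize_retina_image(img, size=(299, 299)):
    """Illustrative sketch of claim-4 standardization: resize,
    contrast adjustment, and background correction. All names and
    parameters are assumptions, not taken from the patent."""
    # Size adjustment: nearest-neighbour resize via integer index sampling
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = img[rows][:, cols].astype(np.float32)

    # Contrast adjustment: linear stretch to the [0, 1] range
    lo, hi = resized.min(), resized.max()
    stretched = (resized - lo) / (hi - lo + 1e-8)

    # Background correction: subtract a coarse background estimate
    # (here the global mean; a blurred background model is also common)
    corrected = stretched - stretched.mean()
    return corrected

img = (np.random.rand(512, 512) * 255).astype(np.uint8)
out = standardize_retina_image(img)
print(out.shape)  # (299, 299)
```

In practice each of the three steps would be tuned to fundus imagery (e.g. contrast-limited adaptive histogram equalization instead of a plain linear stretch), but the overall shape of the pipeline matches the claim.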
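Claim 6 tunes network hyperparameters with particle swarm optimization. A generic, self-contained PSO sketch over box-constrained hyperparameters is shown below; the stand-in objective function (a quadratic minimized at a learning rate of 1e-3 and batch size 32) is purely illustrative, since a real objective would be the validation loss of the trained network.

```python
import random

def pso(objective, bounds, n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over box-constrained dimensions with a
    textbook particle swarm: inertia w, cognitive c1, social c2."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# Stand-in for validation loss: minimized at lr=1e-3, batch size=32
loss = lambda p: (p[0] - 1e-3) ** 2 + ((p[1] - 32) / 32) ** 2
best, best_val = pso(loss, [(1e-5, 1e-1), (8, 128)])
print(best, best_val)
```

Searching over learning rate and batch size as above is the usual setup; the claimed method would plug training-and-validation of the retinal model into `objective`.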
Description
Retinopathy detection method and device based on mobile equipment

Technical Field

The application relates to the technical field of computers, and in particular to a retinopathy detection method based on a mobile device. The application also relates to a mobile device-based retinopathy detection device, a computing device, and a computer-readable storage medium.

Background

Retinopathy is a leading cause of preventable vision loss. Early detection and continuous monitoring are critical for slowing disease progression and enabling timely intervention. Conventional screening typically requires specialized retinal cameras and trained graders, which limits availability in primary care and resource-limited settings. As a result, many patients are screened late or infrequently, increasing the risk of irreversible damage.

Artificial intelligence and deep learning have shown great potential in ocular image analysis and retinopathy screening. However, current mainstream models are generally initialized with general-purpose weights and optimized through standardized procedures that neither fully account for the special characteristics of the ophthalmic domain nor adapt to the hardware limitations of mobile devices. Such limitations may reduce sensitivity to microscopic vascular changes and increase the risk of overfitting to non-clinical artifacts. Moreover, existing mobile screening methods tend to focus on one-time classification rather than long-term monitoring: patients often lack convenient tools to establish individual baselines, track their condition over time, and receive reminders of possible exacerbations between clinical follow-up visits.

Disclosure of Invention

In view of the above, an embodiment of the application provides a mobile device-based retinopathy detection method to address the technical defects in the prior art.
An embodiment of the application also provides a mobile device-based retinopathy detection device, a computing device, and a computer-readable storage medium.

According to a first aspect of an embodiment of the present application, there is provided a mobile device-based retinopathy detection method, including: acquiring an initial retina image of a target user based on a mobile device, and preprocessing the initial retina image to obtain a target retina image; processing the target retina image based on a pre-trained neural network model to obtain an initial detection result; invoking a historical detection result of the target user, and generating diagnosis data based on the historical detection result and the initial detection result; and sending the diagnosis data to a database associated with the target user, and displaying the diagnosis data on the mobile device.

Optionally, the acquiring an initial retina image of the target user based on the mobile device includes: analyzing collected image data based on a guide program preset on the mobile device, and determining a relative positional relationship between the target user and the mobile device and current environmental data; determining guidance data based on the relative positional relationship and the environmental data; and guiding the target user to acquire an image through the mobile device based on the guidance data to obtain the initial retina image.
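The four steps of the first aspect (acquire and preprocess, infer, combine with history, persist and display) can be sketched as a small orchestration. Everything below is an assumption for illustration: the function names, the string stand-ins for images, and the in-memory dictionary standing in for the associated database.

```python
# Illustrative orchestration of the four claimed steps; all names and
# the in-memory "database" are assumptions, not patent text.
history_db = {}  # user_id -> list of past detection results

def preprocess(raw_image):
    # Stand-in for calibration + standardization (strings stand in for pixels)
    return raw_image.lower().strip()

def run_model(image):
    # Stand-in for pre-trained neural network inference
    return {"grade": "mild" if "lesion" in image else "none"}

def generate_diagnosis(history, result):
    # Combine the current result with history to expose a trend
    worsened = any(h["grade"] == "none" for h in history) and result["grade"] != "none"
    return {"current": result, "trend": "worsening" if worsened else "stable"}

def detect(user_id, raw_image):
    target = preprocess(raw_image)                      # step 1: preprocess
    result = run_model(target)                          # step 2: model inference
    history = history_db.get(user_id, [])               # step 3: invoke history
    diagnosis = generate_diagnosis(history, result)     # step 3: diagnosis data
    history_db.setdefault(user_id, []).append(result)   # step 4: send to database
    return diagnosis                                    # step 4: display on device

print(detect("u1", "  CLEAN FUNDUS  "))
print(detect("u1", "  fundus with LESION  "))
```

The point of the sketch is the data flow: each new detection both consults and extends the per-user history, which is what enables the longitudinal monitoring the disclosure emphasizes.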
Optionally, the determining guidance data based on the relative positional relationship and the environmental data includes: invoking a reference retinal image of the target user and/or a generic retinal image; generating a reference relative positional relationship and reference environmental data based on the reference retinal image and/or the generic retinal image; generating first guidance sub-data from the relative positional relationship and the reference relative positional relationship, and generating second guidance sub-data from the environmental data and the reference environmental data; and integrating the first guidance sub-data and the second guidance sub-data to obtain the guidance data.

Optionally, the preprocessing the initial retina image to obtain a target retina image includes: adjusting the lighting, focus, and angle of the initial retina image based on a preset calibration program to obtain a retina adjustment image; and performing standardization processing on the retina adjustment image to obtain the target retina image, wherein the standardization processing includes size adjustment, contrast adjustment, and background correction.

Optionally, the neural network model is initialized on an InceptionV architecture using EYENET weights, wherein the EYENET weights are pre-trained on a Di-EYENET dataset, and the neural network model integrates chained-foraging and cyclone-foraging strategies through dolphinfish foraging optimization; the chained foraging optimizes a plurality of candidate solutions in a coordinated manner, and the cyclone foraging enhances the global exploration capability of the neural network model