KR-20260063119-A - Method for improving object detection performance and evaluation in low-resolution smart glasses
Abstract
The present invention provides: a method for improving and evaluating the object detection performance of low-resolution smart glasses that applies an anti-aliasing filter and super-resolution technology to enhance low-resolution images and enable more accurate detection and evaluation; a method that combines a YOLO-tiny model with eye-tracking technology to optimize the limited computational resources of the smart glasses and efficiently perform object detection in a region of interest (ROI), thereby securing real-time performance; and a method that links with a cloud server to post-process the detected data and update the training data, thereby progressively improving the object detection performance of the YOLO model and increasing accuracy.
Inventors
- 박진홍
- 주현우
Assignees
- 주식회사 딥파인
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-10-30
Claims (8)
- A method for improving and evaluating the object detection performance of low-resolution smart glasses in a task of detecting and evaluating objects, the method comprising: a step of detecting an object image, including attributes of size, color, and shape, using the camera of the smart glasses; a step of applying an anti-aliasing filter and super-resolution to a region of interest (ROI) set for the object image; a step of recognizing and classifying objects using the YOLO-tiny model; a step of reviewing the recognized and classified object data to evaluate the objects as normal or defective products; a step of displaying the evaluated object data to the user through the display of the smart glasses; and a step of filtering and compressing the evaluated object data and uploading it to a cloud server in real time.
- The method of claim 1, wherein the step of applying an anti-aliasing filter and super-resolution to the region of interest (ROI) set for the object image applies temporal anti-aliasing (TAA) to correct motion blur and applies an FXAA filter to increase the accuracy of object detection.
- The method of claim 1, wherein the step of recognizing and classifying objects using the YOLO-tiny model further includes a step of transmitting the object data to a cloud server and re-performing object detection and evaluation using a YOLO model when an object recognition error occurs or an object cannot be recognized.
- The method of claim 3, wherein the step of transmitting the object data to a cloud server and re-performing object detection and evaluation using a YOLO model when an object recognition error occurs or an object cannot be recognized transmits the data of the object to the cloud server and performs additional analysis and post-processing using a high-performance YOLO model or another artificial-intelligence model.
- The method of claim 1, wherein the step of recognizing and classifying objects using the YOLO-tiny model enhances object-boundary learning by augmenting low-resolution data and integrating filters within the YOLO network structure, optimizes object detection in the region of interest (ROI) by combining the YOLO-tiny model with eye-tracking technology, and applies lightweighting and quantization to suit the computational resources of the smart glasses.
- The method of claim 1, wherein the step of reviewing the recognized and classified object data to evaluate the objects as normal or defective products includes: a step of evaluating surface damage and cracks according to criteria set for the object, and evaluating the object as a defective product if a defect is determined; a step of simultaneously performing quantitative and qualitative evaluations of the object and determining the state of the object by synthesizing the evaluated results; and a step of predicting changes in the state of the object over time from data accumulated on the cloud server and estimating its shelf life or the possibility of quality degradation.
- The method of claim 1, wherein the step of filtering and compressing the evaluated object data and uploading it to a cloud server in real time post-processes the object data on the cloud server and stores it as base training data, thereby improving the object detection performance of the YOLO model.
- The method of claim 1, wherein the step of displaying the evaluated object data to the user through the display of the smart glasses further includes a step of providing an error alarm through the display of the smart glasses or another notification means when an error occurs in the user's classification of the object as a normal or defective product.
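The processing flow claimed above can be sketched as follows. This is a minimal illustrative Python sketch, not the patent's implementation: all function names are hypothetical, the nearest-neighbour upscaler stands in for a real super-resolution network, the detector stands in for YOLO-tiny inference, and the TAA/FXAA filtering, display, and cloud-upload steps are only noted in comments.

```python
# Hypothetical sketch of the claim-1 pipeline on a frame represented as a
# nested list of pixel intensities. Stage names mirror the claimed steps.

def crop_roi(frame, gaze, size=32):
    """Crop a region of interest around the gaze point (eye-tracking step)."""
    gx, gy = gaze
    h = size // 2
    return [row[max(0, gx - h):gx + h] for row in frame[max(0, gy - h):gy + h]]

def upscale_nearest(roi, factor=2):
    """Stand-in for super-resolution: nearest-neighbour upsampling.
    A real device would run an SR network here, with TAA/FXAA applied
    to the ROI before or after upscaling."""
    out = []
    for row in roi:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

def detect_and_classify(image):
    """Placeholder for YOLO-tiny inference; returns (label, confidence)."""
    mean = sum(sum(r) for r in image) / (len(image) * len(image[0]))
    return ("object", mean)  # dummy detection result

def evaluate(label, confidence, threshold=0.5):
    """Claimed review step: mark the detected object normal or defective."""
    return "normal" if confidence >= threshold else "defective"

def process_frame(frame, gaze):
    roi = crop_roi(frame, gaze)
    enhanced = upscale_nearest(roi)
    label, conf = detect_and_classify(enhanced)
    verdict = evaluate(label, conf)
    # Remaining claimed steps (not modelled here): show `verdict` on the
    # smart-glasses display, then filter/compress the result and upload
    # it to the cloud server for post-processing and retraining.
    return {"label": label, "confidence": conf, "verdict": verdict}
```

Under this sketch, restricting detection to the gaze-centred ROI is what lets a low-power device keep real-time performance: only a small crop, rather than the full frame, passes through the expensive enhancement and inference stages.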
Description
The present invention relates to an object detection and evaluation technology using low-resolution smart glasses. In particular, it relates to a technology that overcomes the limited computational resources and low-resolution image-processing capability of smart glasses by using deep-learning algorithms and filtering techniques such as the YOLO-tiny model, eye-tracking technology, and anti-aliasing filters, and that improves performance through learning by linking the smart glasses with a cloud-based server that post-processes the detected data.

Smart glasses are being used for an increasingly diverse range of purposes in industrial settings. In particular, they are establishing themselves as important devices that enhance worker efficiency and provide real-time information across various sectors, including logistics, manufacturing, and distribution. For instance, in the logistics industry, workers can visually receive necessary information without using their hands, significantly improving work efficiency. In manufacturing, smart glasses are used for quality inspection, machine maintenance, and training; by providing workers with real-time information and instructions needed on-site, they help reduce work errors and enhance safety. In the distribution industry as well, they contribute to the quality of customer service by enabling immediate verification of product information and inventory status.

Although smart glasses have established themselves as an important tool for increasing efficiency and productivity in industrial settings, several limitations still hinder their field application. In particular, problems that arise when smart glasses are used to classify items or evaluate quality seriously affect accuracy and work speed.
These issues are further exacerbated by the low-resolution image-processing capability and limited computational resources of smart glasses. For example, in quality-inspection processes where minute defects or damage must be checked, low-resolution images fail to reveal the defects clearly, forcing workers to inspect items visually. This increases time consumption, raises worker fatigue, and can ultimately lower work accuracy.

In addition, smart glasses are designed as miniaturized devices, which limits their ability to process complex data in real time. In particular, when numerous items must be detected and evaluated rapidly on site, as in manufacturing or logistics warehouses, limited computing resources can reduce work efficiency. Furthermore, because smart glasses have limited data-processing capability, they struggle to process large amounts of data in real time or to perform complex post-processing. Although artificial-intelligence models need to learn continuously from the various situations and data arising in the field, current smart glasses find such tasks difficult. As a result, users may frequently make mistakes when classifying or evaluating items, which can lead to errors in processing tasks; for example, when classifying items into normal and defective products, an incorrect judgment by the user can cause confusion in the entire workflow.

[Prior Art Literature] [Patent Literature] Republic of Korea Registered Patent No. 10-2200619

FIG. 1 is a drawing of an embodiment of a service environment according to the present invention. FIG. 2 is a block diagram illustrating an example of components that may be included in smart glasses according to the present invention. FIG. 3 is a block diagram illustrating an example of components that a cloud server according to the present invention may include. FIG. 4 is a flowchart of a method for improving object detection performance by smart glasses and a cloud server according to the present invention. FIG. 5 is a flowchart of the step of recognizing and classifying objects according to one embodiment of the present invention. FIG. 6 is a flowchart of the evaluation step according to one embodiment of the present invention. FIG. 7 is a flowchart of the step of displaying to a user according to one embodiment of the present invention. FIG. 8 is an example image of a product evaluated through smart glasses according to one embodiment of the present invention. FIG. 9 is an example image of a defective product evaluated through smart glasses according to one embodiment of the present invention.

The present invention will be described below with reference to the attached drawings. However, the present invention may be implemented in various different forms and is therefore not limited to the embodiments described herein. Furthermore, in order to clearly explain the present invention in the drawings, parts unrelated to the explanation have