KR-20260064091-A - METHOD, SYSTEM, AND COMPUTER PROGRAM FOR PROVIDING ANNOTATION INFORMATION IN IMAGE
Abstract
An image annotation system according to an embodiment of the technical concept of the present disclosure comprises a memory storing at least one instruction and at least one processor executing the at least one instruction, wherein the at least one processor receives an input image containing an annotation target, obtains annotation information corresponding to a region of the annotation target included in the input image using a trained segmentation model, receives an adjustment input for the obtained annotation information, and provides adjusted annotation information in response to the received adjustment input.
Inventors
- 구형일
- 권경범
- 박상현
- 나우진
Assignees
- 아주대학교산학협력단
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-10-31
Claims (16)
- An image annotation system comprising at least one computing device, the system comprising: a memory storing at least one instruction; and at least one processor configured to execute the at least one instruction, wherein the at least one processor is configured to: receive an input image containing an annotation target; obtain, using a trained segmentation model, annotation information corresponding to a region of the annotation target included in the input image; receive an adjustment input for the obtained annotation information; and provide adjusted annotation information in response to the received adjustment input.
- The image annotation system of claim 1, wherein the at least one processor is further configured to: input the input image into the segmentation model; obtain, from the segmentation model, an inference mask image representing an inference region for the annotation target; extract boundary information for the inference region from the obtained inference mask image; and obtain the annotation information by extracting a plurality of annotation points based on the extracted boundary information.
- The image annotation system of claim 2, wherein the at least one processor is further configured to: generate a plurality of boundary points corresponding to the contour of the inference region by approximating the points forming the contour; and sample the plurality of annotation points from the generated plurality of boundary points.
- The image annotation system of claim 3, wherein the plurality of annotation points are sampled such that the distance between adjacent annotation points is constant.
- The image annotation system of claim 2, wherein the at least one processor is further configured to set the number of annotation points based on the area or shape complexity of the inference region.
- The image annotation system of claim 2, wherein the at least one processor is further configured to: generate a mask image corresponding to the adjusted annotation information; and control training of the segmentation model based on an error between the inference mask image obtained from the segmentation model and the generated mask image.
- The image annotation system of claim 1, wherein the at least one processor is further configured to: output an adjustment interface including the obtained annotation information and the input image; receive the adjustment input for the annotation information through the output adjustment interface; and in response to the received adjustment input, move the positions of at least some of the plurality of annotation points included in the annotation information, delete at least some of the annotation points, or add annotation points.
- The image annotation system of claim 7, wherein the at least one processor is further configured to adjust an annotation region through interpolation of the plurality of annotation points adjusted in response to the adjustment input.
- The image annotation system of claim 8, wherein the at least one processor performs the interpolation of the plurality of annotation points by applying an interactive spline interpolation technique.
- An image annotation method performed by at least one computing device, the method comprising: receiving an input image containing an annotation target; obtaining, using a trained segmentation model, annotation information corresponding to a region of the annotation target included in the input image; receiving an adjustment input for the obtained annotation information; and providing adjusted annotation information in response to the received adjustment input.
- The image annotation method of claim 10, wherein obtaining the annotation information comprises: obtaining, from the segmentation model, an inference mask image representing an inference region for the annotation target; extracting boundary information for the inference region from the obtained inference mask image; and obtaining the annotation information by extracting a plurality of annotation points based on the extracted boundary information.
- The image annotation method of claim 11, further comprising: generating a mask image corresponding to the adjusted annotation information; and controlling training of the segmentation model based on an error between the inference mask image obtained from the segmentation model and the generated mask image.
- The image annotation method of claim 10, wherein receiving the adjustment input comprises: outputting an adjustment interface including the obtained annotation information and the input image; and receiving the adjustment input for the annotation information through the output adjustment interface, wherein the adjustment input includes an input for moving the positions of at least some of the plurality of annotation points included in the annotation information, deleting at least some of the annotation points, or adding annotation points.
- The image annotation method of claim 13, wherein providing the adjusted annotation information comprises: adjusting the plurality of annotation points in response to the received adjustment input; adjusting an annotation region through interpolation of the adjusted plurality of annotation points; and providing adjusted annotation information corresponding to the adjusted annotation region.
- The image annotation method of claim 14, wherein adjusting the annotation region through interpolation of the plurality of annotation points comprises performing the interpolation of the plurality of annotation points by applying an interactive spline interpolation technique.
- A computer program stored on a computer-readable recording medium for executing, on a computer, the image annotation method according to any one of claims 10 to 15.
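Claims 3 and 4 describe sampling annotation points from the contour of the inference region such that the spacing between points is constant. The patent does not specify an implementation; in practice the ordered contour of a mask is typically obtained with a library routine (e.g., OpenCV's `findContours`/`approxPolyDP`). The sketch below assumes an ordered closed contour polygon is already available and illustrates only the constant-arc-length sampling step; the function name and parameters are illustrative, not from the patent.

```python
import numpy as np

def resample_closed_contour(points, n_points):
    """Resample a closed polygonal contour so that the sampled annotation
    points are spaced at (approximately) constant arc-length intervals."""
    pts = np.asarray(points, dtype=float)
    # Close the contour by appending the first vertex at the end.
    closed = np.vstack([pts, pts[:1]])
    # Cumulative arc length along the contour, starting at 0.
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    perimeter = cum[-1]
    # Target arc-length positions, evenly spaced over the perimeter.
    targets = np.linspace(0.0, perimeter, n_points, endpoint=False)
    # Linearly interpolate x and y coordinates against arc length.
    x = np.interp(targets, cum, closed[:, 0])
    y = np.interp(targets, cum, closed[:, 1])
    return np.stack([x, y], axis=1)

# Example: resample a 100x100 square contour into 8 equidistant points.
square = np.array([[0, 0], [100, 0], [100, 100], [0, 100]])
annotation_points = resample_closed_contour(square, 8)
```

Claim 5's variant, where the number of points depends on area or shape complexity, would amount to choosing `n_points` from properties of the inference region before calling this function.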
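Claims 8, 9, 14, and 15 adjust the annotation region by interpolating a smooth boundary through the (possibly user-moved) annotation points with a spline. The patent names only an "interactive spline interpolation technique" without defining it; the sketch below uses a uniform Catmull-Rom spline, one common choice that passes exactly through every annotation point, as an assumed stand-in.

```python
import numpy as np

def catmull_rom_closed(points, samples_per_segment=10):
    """Interpolate a smooth closed curve through annotation points using a
    uniform Catmull-Rom spline; the curve passes through every input point."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
    curve = []
    for i in range(n):
        # Each segment runs from p1 to p2, with p0 and p3 as neighbours
        # (indices wrap around because the boundary is closed).
        p0, p1 = pts[(i - 1) % n], pts[i]
        p2, p3 = pts[(i + 1) % n], pts[(i + 2) % n]
        # Uniform Catmull-Rom basis coefficients.
        a = 2 * p1
        b = p2 - p0
        c = 2 * p0 - 5 * p1 + 4 * p2 - p3
        d = -p0 + 3 * p1 - 3 * p2 + p3
        seg = 0.5 * (a[None]
                     + b[None] * t[:, None]
                     + c[None] * t[:, None] ** 2
                     + d[None] * t[:, None] ** 3)
        curve.append(seg)
    return np.vstack(curve)

# Example: smooth a 4-point diamond into a rounded closed boundary.
diamond = np.array([[0, 1], [1, 0], [0, -1], [-1, 0]], dtype=float)
smooth_boundary = catmull_rom_closed(diamond, samples_per_segment=16)
```

Because the spline interpolates rather than approximates, dragging one annotation point in the adjustment interface deforms only the neighbouring curve segments, which matches the local-edit workflow the claims describe.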
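Claims 6 and 12 rasterize the adjusted annotation polygon back into a mask image and drive further training of the segmentation model from the error between that mask and the model's inference mask. The patent does not name the error measure; the sketch below rasterizes with an even-odd ray-casting test and uses a Dice-based error as one plausible choice. All function names are illustrative.

```python
import numpy as np

def polygon_to_mask(points, height, width):
    """Rasterize a closed annotation polygon into a binary mask using an
    even-odd (ray-casting) point-in-polygon test, vectorized over pixels."""
    pts = np.asarray(points, dtype=float)
    yy, xx = np.mgrid[0:height, 0:width]
    px, py = xx.ravel() + 0.5, yy.ravel() + 0.5  # test pixel centres
    inside = np.zeros(px.shape, dtype=bool)
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        # A horizontal ray from the pixel crosses this edge iff the edge
        # spans the pixel's y and the crossing lies to the pixel's right.
        crosses = (y1 > py) != (y2 > py)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_at = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (px < x_at)
    return inside.reshape(height, width).astype(np.uint8)

def dice_error(mask_a, mask_b, eps=1e-7):
    """1 - Dice coefficient: a mask-overlap error usable as a training signal."""
    a, b = mask_a.astype(float), mask_b.astype(float)
    inter = (a * b).sum()
    return 1.0 - (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

# Example: compare an expert-corrected mask against a narrower inference.
corrected = polygon_to_mask([[2, 2], [13, 2], [13, 13], [2, 13]], 16, 16)
inferred = polygon_to_mask([[2, 2], [11, 2], [11, 13], [2, 13]], 16, 16)
err = dice_error(inferred, corrected)
```

In the claimed workflow, `err` (or a similar pixelwise loss) would be backpropagated through the segmentation model so that expert corrections gradually improve subsequent inference masks.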
Description
Method, System, and Computer Program for Providing Annotation Information in Images

The technical concept of the present disclosure relates to a method, system, and computer program for providing annotation information of an image.

Recently, research on applying artificial intelligence technology to various specialized fields has been active, and in some areas it has already demonstrated outstanding performance. Meanwhile, accuracy and reliability are critical when applying artificial intelligence technology to tasks such as detecting lesions in medical images or detecting defects in product images in manufacturing, so it is necessary to provide detection results (annotations) that reflect the opinions or feedback of experts in the relevant field (such as doctors) on the detection results of AI models. However, conventional annotation tools have low utility due to various problems, including the inconvenience of incorporating expert feedback, increased time and processing costs, and data security issues.

A brief description of each drawing is provided to aid understanding of the drawings cited in the present disclosure.
FIG. 1 is a diagram schematically describing an image annotation system that provides annotation information of an image according to an exemplary embodiment of the present disclosure.
FIG. 2 is a block diagram illustrating the configuration of the image annotation system illustrated in FIG. 1.
FIG. 3 is an example diagram of an annotation region generation operation performed through the analysis region setting unit or interpolation unit illustrated in FIG. 2.
FIG. 4 is a diagram for explaining the specific operation of the mask generation unit illustrated in FIG. 2.
FIG. 5 is a diagram illustrating the training operation of a segmentation model performed by the segmentation model training unit.
FIG. 6 is a diagram for explaining the configuration and operation of the annotator shown in FIG. 2 in more detail.
FIG. 7 is a flowchart for explaining a method of training a segmentation model according to an exemplary embodiment of the present disclosure.
FIG. 8 is a flowchart for explaining a method for providing annotation information according to an exemplary embodiment of the present disclosure.
FIG. 9 is a schematic hardware configuration block diagram of a computing device constituting an image annotation system according to an embodiment of the present disclosure.

Exemplary embodiments according to the technical concept of the present disclosure are provided to more fully explain the technical concept of the present disclosure to those skilled in the art. The following embodiments may be modified in various forms, and the scope of the technical concept of the present disclosure is not limited to them; rather, these embodiments are provided to make the present disclosure faithful and complete and to fully convey the technical concept of the present disclosure to those skilled in the art.

In the present disclosure, terms such as "first" and "second" are used to describe various members, regions, layers, parts, and/or components; however, these members, regions, layers, parts, and/or components should not be limited by such terms. These terms do not imply a specific order, hierarchy, or superiority and are used solely to distinguish one member, region, part, or component from another. Accordingly, a first member, region, part, or component described below may refer to a second member, region, part, or component without departing from the teachings of the technical concept of the present disclosure. For example, without departing from the scope of the present disclosure, a first component may be named a second component, and similarly, a second component may be named a first component.
Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by those skilled in the art to which the technical concept of the present disclosure belongs. Furthermore, commonly used terms, such as those defined in dictionaries, should be interpreted as having a meaning consistent with their meaning in the context of the relevant technology, and should not be interpreted in an overly formal sense unless explicitly defined herein. Where an embodiment can be implemented differently, a specific process sequence may be performed differently from the described order. For example, two processes described consecutively may be performed substantially simultaneously or in the reverse order of description. In the attached drawings, variations from the depicted shapes may be expected depending, for example, on manufacturing technology and/or tolerances. Accordingly, embodiments based on the technical concept of the present disclosure should not be interpreted as being limited to the specific shapes of the areas depicted in the present disclo