US-20260127797-A1 - FACIAL IMAGE DE-IDENTIFICATION METHOD AND SYSTEM
Abstract
A facial image de-identification method and system are provided. A facial image de-identification method according to some embodiments may include acquiring a facial image, detecting one or more facial features from the facial image, determining at least some of the detected facial features as a de-identification region of the facial image, and applying an image transformation technique to the determined de-identification region to generate a de-identification image. With this method, a de-identification image can be created that preserves anatomical structure information, such as the facial skeleton, while reducing the possibility of individual identification (that is, the risk of re-identification).
Inventors
- Woo Jin Kim
- Kun Yong SUNG
- Sang Won Park
- Seong Uk Kang
Assignees
- KNU-INDUSTRY COOPERATION FOUNDATION
Dates
- Publication Date
- 20260507
- Application Date
- 20250430
- Priority Date
- 20241105
Claims (15)
- 1. A facial image de-identification method performed by at least one processor, the method comprising: acquiring a facial image; detecting one or more facial features from the facial image; determining at least some of the detected facial features as a de-identification region of the facial image; and applying an image transformation technique to the determined de-identification region to generate a de-identification image.
- 2. The method of claim 1, wherein the image transformation technique is not applied to a remaining region of the facial image except for the de-identification region.
- 3. The method of claim 1, wherein the one or more facial features include at least one of an eye, a nose, a mouth, and an ear.
- 4. The method of claim 1, wherein the one or more facial features include at least one of a scar and a birthmark.
- 5. The method of claim 1, wherein the facial image is a tomographic image of a facial region.
- 6. The method of claim 1, wherein the detecting of the one or more facial features includes acquiring a deep learning model trained to detect a facial feature from an input image, and detecting the one or more facial features through the trained deep learning model.
- 7. The method of claim 6, wherein the training of the deep learning model includes acquiring a labeled image set and an unlabeled image set, the labeled image set being an image set to which a facial feature label is assigned, and the number of samples of the unlabeled image set being greater than that of the labeled image set, constructing an auxiliary deep learning model using the labeled image set, generating a training set by assigning the facial feature label to the unlabeled image set using the auxiliary deep learning model, and training the deep learning model using the training set.
- 8. The method of claim 1, further comprising: extracting a first feature embedding from the facial image through an image encoder; extracting a second feature embedding from the de-identification image through the image encoder; and calculating a re-identification risk score of the de-identification image based on similarity between the first feature embedding and the second feature embedding.
- 9. The method of claim 1, wherein the facial image includes a plurality of slice images generated through tomography, and the de-identification image includes a plurality of de-identification slice images corresponding to the slice images, and the method further includes: performing 3D volume rendering on the slice images to generate a first rendering image; performing the 3D volume rendering on the de-identification slice images to generate a second rendering image; extracting a first feature embedding from the first rendering image and a second feature embedding from the second rendering image through an image encoder; and calculating a re-identification risk score of the de-identification image based on similarity between the first feature embedding and the second feature embedding.
- 10. The method of claim 1, wherein a plurality of facial features is detected, and the determining of the detected facial features as the de-identification region of the facial image includes generating a plurality of de-identification candidate combinations from the plurality of facial features, generating temporary de-identification images by applying the image transformation technique to each of the de-identification candidate combinations, calculating a re-identification risk score of each of the temporary de-identification images, selecting, among the de-identification candidate combinations, a de-identification candidate combination whose re-identification risk score is less than a reference value and satisfies a preset condition as a de-identification target combination, and determining the de-identification region based on the de-identification target combination, and the preset condition is defined based on at least one of the number of facial features belonging to the de-identification candidate combination and a region size thereof.
- 11. The method of claim 1, wherein the detected facial features include a first facial feature and a second facial feature, the de-identification region includes the first facial feature, and the generating of the de-identification image includes generating an intermediate de-identification image by applying a first image transformation technique to the first facial feature, calculating a re-identification risk score of the intermediate de-identification image, adding the second facial feature to the de-identification region in response to a determination that the re-identification risk score is equal to or greater than a reference value, and generating the de-identification image by applying a second image transformation technique to the second facial feature.
- 12. A facial image de-identification system comprising: one or more processors; and a memory storing a computer program executed by the one or more processors, wherein the computer program includes instructions for acquiring a facial image, detecting one or more facial features from the facial image, determining at least some of the detected facial features as a de-identification region of the facial image, and applying an image transformation technique to the determined de-identification region to generate a de-identification image.
- 13. The facial image de-identification system of claim 12, wherein the image transformation technique is not applied to a remaining region of the facial image except for the de-identification region.
- 14. The facial image de-identification system of claim 12, wherein the one or more facial features include at least one of an eye, a nose, a mouth, and an ear.
- 15. A computer program stored on a computer-readable recording medium, coupled with a processor of a computer, to execute: acquiring a facial image; detecting one or more facial features from the facial image; determining at least some of the detected facial features as a de-identification region of the facial image; and applying an image transformation technique to the determined de-identification region to generate a de-identification image.
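Claims 1 and 2 together describe an image transformation confined to the determined de-identification region, with the rest of the image left untouched. The sketch below illustrates that idea under stated assumptions: the claims do not specify the transformation, so a simple mean blur over rectangular feature regions stands in for it, and the `de_identify` function, its region format `(y0, y1, x0, x1)`, and the use of NumPy are illustrative choices, not part of the disclosure.

```python
import numpy as np

def de_identify(image, feature_regions, kernel=5):
    """Apply a mean blur only inside the given de-identification regions.

    image           -- 2D array of pixel intensities
    feature_regions -- list of (y0, y1, x0, x1) rectangles (half-open)
    kernel          -- side length of the square averaging window

    Pixels outside every region are returned unchanged, matching the
    region-limited transformation of claims 1 and 2.
    """
    out = image.astype(float).copy()
    pad = kernel // 2
    # Edge-padded copy so windows near the border stay in bounds.
    padded = np.pad(image.astype(float), pad, mode="edge")
    for (y0, y1, x0, x1) in feature_regions:
        for y in range(y0, y1):
            for x in range(x0, x1):
                # Window centered on (y, x) in the original image.
                out[y, x] = padded[y:y + kernel, x:x + kernel].mean()
    return out
```

A real pipeline would vectorize the inner loops and operate on tomographic slices, but the invariant is the same: only pixels inside the de-identification region are modified.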
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the priority of Korean Patent Application No. 10-2024-0155485, filed on Nov. 5, 2024, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference.

BACKGROUND

Field

The present disclosure relates to a technology for de-identifying a facial image.

Description of the Related Art

Generally, when medical data is used within a hospital, a separate de-identification process is not required. However, when medical data is exported for research collaboration with external institutions or for multi-institutional joint research, de-identification is essential to protect the personal information of a patient. Among various types of medical data, computed tomography (CT) images particularly require de-identification: facial features such as the eyes, nose, mouth, and ears of a patient can be clearly reconstructed from CT images, and those features can in turn be used to identify the patient.

Meanwhile, most existing facial image de-identification methods automatically remove the facial region from the image to reduce the possibility of individual identification. However, because such methods also remove important structures such as the facial skeleton and teeth, they cannot be used in fields that require anatomical structure information (for example, plastic surgery research).

PRIOR ART LITERATURE

Patent Literature

(Patent Literature 1) Korean Patent Publication No. 10-2023-0080111 (published on Jun. 7, 2023)

SUMMARY

An object of some embodiments of the present disclosure is to provide a de-identification method and system that reduce the possibility of individual identification (that is, the risk of re-identification) while preserving the overall structural information inherent in a facial image (for example, anatomical structure information such as the facial skeleton). Objects of the present disclosure are not limited to the object mentioned above, and other objects not mentioned will be clearly understood by those skilled in the art from the description below.

In order to achieve the above-described object, according to some embodiments of the present disclosure, there is provided a facial image de-identification method performed by at least one processor, the method including: acquiring a facial image; detecting one or more facial features from the facial image; determining at least some of the detected facial features as a de-identification region of the facial image; and applying an image transformation technique to the determined de-identification region to generate a de-identification image.

In some embodiments, the image transformation technique may not be applied to the remaining region of the facial image outside the de-identification region.

In some embodiments, the one or more facial features may include at least one of an eye, a nose, a mouth, and an ear.

In some embodiments, the one or more facial features may include at least one of a scar and a birthmark.

In some embodiments, the facial image may be a tomographic image of a facial region.

In some embodiments, the detecting of the one or more facial features may include acquiring a deep learning model trained to detect a facial feature from an input image, and detecting the one or more facial features through the trained deep learning model.
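As a minimal stand-in for the trained detection model just described (the disclosure does not fix an architecture or output format), the sketch below "detects" a facial feature as the bounding box of above-threshold pixels in an intensity image. The function name, the `(y0, y1, x0, x1)` box convention, and the thresholding itself are illustrative assumptions; a real system would obtain such boxes from a learned detector.

```python
import numpy as np

def detect_feature_box(image, threshold):
    """Toy detector standing in for a trained deep learning model.

    Returns the half-open bounding box (y0, y1, x0, x1) enclosing all
    pixels whose intensity exceeds `threshold`, or None when no such
    pixel exists (no feature detected).
    """
    ys, xs = np.nonzero(image > threshold)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1
```

Boxes in this form can be fed directly to a region-limited transformation, so that detection output and de-identification input share one representation.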
In some embodiments, the training of the deep learning model may include: acquiring a labeled image set and an unlabeled image set, the labeled image set being an image set to which a facial feature label is assigned and the number of samples of the unlabeled image set being greater than that of the labeled image set; constructing an auxiliary deep learning model using the labeled image set; generating a training set by assigning the facial feature label to the unlabeled image set using the auxiliary deep learning model; and training the deep learning model using the training set.

In some embodiments, the method may further include: extracting a first feature embedding from the facial image through an image encoder; extracting a second feature embedding from the de-identification image through the image encoder; and calculating a re-identification risk score of the de-identification image based on similarity between the first feature embedding and the second feature embedding.

In some embodiments, the facial image may include slice images generated through tomography, and the de-identification image may include a plurality of de-identification slice images corresponding to the slice images, and the method may further include performing 3D volume rendering on the slice images to generate a first rendering image, performing the 3D volume rendering on the de-identification slice images to generate a second rendering image, extractin
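The re-identification risk score described above compares a feature embedding of the original image with that of its de-identified counterpart. The disclosure does not name a similarity measure or an encoder, so the sketch below assumes cosine similarity on already-computed embedding vectors; `reid_risk_score` is a hypothetical helper, and producing the embeddings themselves is left to whichever image encoder the system uses.

```python
import numpy as np

def reid_risk_score(orig_embedding, deid_embedding):
    """Re-identification risk as cosine similarity between the feature
    embedding of the original facial image and that of the
    de-identification image. A score near 1 means the de-identified
    image is still easy to link back to the original; a score near 0
    means the link has largely been broken.
    """
    a = np.asarray(orig_embedding, dtype=float)
    b = np.asarray(deid_embedding, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A score of this kind is what the combination-search and iterative-widening embodiments compare against a reference value: if the score is still too high, more facial features are added to the de-identification region and the transformation is applied again.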