CN-121983262-A - AI automatic charging method and system for removing facial spots in picoseconds
Abstract
The invention relates to the technical field of image processing, and in particular to an AI automatic charging method and system for removing facial spots in picoseconds. The method comprises: obtaining a facial image of a patient; preprocessing the facial image to obtain a target image; inputting the target image into a trained semantic segmentation model to obtain a color spot area and a background area, and marking the pixels in the color spot area as target pixels; calculating a total color spot area estimation coefficient for the patient; inputting the total color spot area estimation coefficient into a preset mapping function to obtain the treatment cost; and encrypting the facial image, the total color spot area estimation coefficient, and the treatment cost, then outputting the encrypted treatment cost. The invention gives a more objective cost, establishes a transparent charging mechanism that cannot be tampered with, and improves the medical experience of patients.
Inventors
- FAN KAI
Assignees
- 广州市中崎商业机器股份有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-02-07
Claims (10)
- 1. An AI automatic charging method for removing facial stains in picoseconds, comprising: obtaining a facial image of a patient; preprocessing the facial image to obtain a target image; inputting the target image into a trained semantic segmentation model to obtain a color spot area and a background area, wherein the semantic segmentation model generates a binary semantic segmentation mask from the target image, and when the pixel in the ith row and jth column of the binary semantic segmentation mask is 1, the pixel in the ith row and jth column of the target image is a target pixel, i and j being positive integers; calculating a total color spot area estimation coefficient for the patient, comprising: obtaining the center point coordinates of the facial image and the coordinates of each target pixel, calculating a gradient compensation coefficient for each target pixel from the center point coordinates and that pixel's coordinates, and calculating the total color spot area estimation coefficient from the gradient compensation coefficients of the target pixels; inputting the total color spot area estimation coefficient into a preset mapping function to obtain a treatment cost; and encrypting the facial image, the total color spot area estimation coefficient, and the treatment cost, then outputting the encrypted treatment cost.
- 2. The AI automatic charging method for picosecond removal of facial stains according to claim 1, wherein preprocessing the facial image comprises: performing illumination component estimation and reflection component recovery on the facial image through a multi-scale Retinex algorithm to obtain a first intermediate image; smoothing the first intermediate image through a bilateral filtering algorithm to obtain a second intermediate image; and converting the second intermediate image into the YCbCr color space to obtain the target image.
- 3. The AI automatic charging method for picosecond removal of facial stains according to claim 1, wherein calculating the gradient compensation coefficient of a target pixel comprises: obtaining the patient's facial region in the facial image, and obtaining the length and width of a circumscribed rectangle of the facial region, wherein the center coordinates of the circumscribed rectangle serve as the center point coordinates; and calculating the cosine of the facial skin surface angle corresponding to the target pixel from the target pixel, the center point coordinates, and the length and width of the circumscribed rectangle, and determining the reciprocal of that cosine as the gradient compensation coefficient of the target pixel.
- 4. The AI automatic charging method for picosecond removal of facial stains according to claim 3, wherein the formula for calculating the cosine of the facial skin surface angle corresponding to the target pixel is as follows: [formula omitted in source]; wherein the first quantity is the cosine of the facial skin surface angle corresponding to the target pixel, the second is the difference between the horizontal-axis coordinate of the target pixel and that of the center point, W is the length of the circumscribed rectangle, and H is the width of the circumscribed rectangle.
- 5. The AI automatic charging method for picosecond removal of facial stains according to claim 4, wherein the formula for calculating the total color spot area estimation coefficient is as follows: [formula omitted in source]; wherein K_n is the gradient compensation coefficient of the nth target pixel, λ is a preset pixel density constant, n is a positive integer, and N is the total number of target pixels.
- 6. The AI automatic billing method for picosecond removal of facial stains according to claim 5, wherein inputting the total color spot area estimation coefficient into a preset mapping function to obtain the treatment cost comprises: obtaining the type of the patient's facial color spots, and indexing the mapping function by that type, wherein the formula of the mapping function for the kth type is as follows: [formula omitted in source]; S_k is the standard area value corresponding to the kth type of facial color spot, and a_k is the standard cost value corresponding to the kth type of facial color spot; and inputting the total color spot area estimation coefficient into the mapping function to obtain the treatment cost.
- 7. The AI automatic billing method for picosecond removal of facial stains according to claim 3, wherein obtaining the patient's facial region in the facial image comprises: converting the facial image into a single-channel grayscale image, performing gradient calculation on the grayscale image with an edge detection operator, and extracting a facial contour edge map; and obtaining the connected domains inside the closed facial contour, and determining the connected domain with the largest area as the patient's facial region.
- 8. The AI automatic billing method for picosecond removal of facial stains according to claim 1, wherein encrypting and then outputting the facial image, the total color spot area estimation coefficient, and the treatment cost comprises: constructing a full-flow diagnosis-and-treatment data packet, wherein the data packet comprises the facial image, the total color spot area estimation coefficient, and the treatment cost; encrypting the full-flow diagnosis-and-treatment data packet with an encryption algorithm, and calculating a unique digital hash digest of the packet; and synchronously uploading the digital hash digest to a consortium blockchain network for distributed consensus storage, and outputting the returned block transaction hash.
- 9. The AI automatic charging method for picosecond removal of facial stains according to claim 1, wherein obtaining the trained semantic segmentation model comprises: obtaining a plurality of training samples, wherein each training sample comprises a training image and a binary semantic segmentation mask corresponding to it, the training images are historical clinical facial images containing color spot areas, and the corresponding binary semantic segmentation masks are manually annotated; and training an initial semantic segmentation model of the FCN architecture on the training samples to obtain the trained semantic segmentation model.
- 10. An AI automatic charging system for picosecond removal of facial stains, comprising a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program to implement the AI automatic charging method for picosecond removal of facial stains according to any one of claims 1 to 9.
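The packet-and-digest step of claim 8 can be sketched as follows. This is a minimal illustration, not the patented implementation: the field names, the JSON serialization, and the choice of SHA-256 are all assumptions, since the claim names neither a serialization format nor a specific hash function, and the encryption and blockchain-upload steps are stubbed out entirely.

```python
import hashlib
import json

def build_packet(face_image_bytes: bytes, area_coefficient: float,
                 treatment_cost: float) -> dict:
    # Full-flow diagnosis-and-treatment data packet.
    # Field names are illustrative, not from the patent.
    return {
        "face_image_hex": face_image_bytes.hex(),
        "area_coefficient": area_coefficient,
        "treatment_cost": treatment_cost,
    }

def packet_digest(packet: dict) -> str:
    # Canonical JSON serialization (sorted keys, no whitespace) so the
    # digest is reproducible, then a SHA-256 "unique digital hash digest".
    canonical = json.dumps(packet, sort_keys=True,
                           separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

packet = build_packet(b"\x89PNG...", 152.7, 1200.0)
digest = packet_digest(packet)
print(len(digest))  # 64 hex characters
```

In claim 8 this digest, rather than the packet itself, is what gets uploaded to the consortium blockchain, so any later tampering with the stored packet is detectable by recomputing and comparing the hash.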
Description
Technical Field

The present invention relates generally to the field of image processing technology, and more particularly to an AI automatic charging method and system for removing facial spots in picoseconds.

Background

Picosecond laser technology is a milestone innovation in modern photocosmetics: it delivers energy in ultra-short pulses on the picosecond scale. The dominant effect is photoacoustic rather than photothermal, so subcutaneous melanin particles are instantaneously shattered into dust-like fragments that are then rapidly metabolized and cleared through the lymphatic system, while the risk of thermal damage to the surrounding normal skin tissue is greatly reduced. With its excellent pigment clearance rate, very low side effects, and short postoperative recovery period, picosecond laser spot removal has largely replaced traditional lasers and has become the gold-standard treatment for chloasma, freckles, nevus of Ota, nevus fuscocaeruleus, and other pigmentary disorders.

However, the existing billing mode remains at the primitive stage of the doctor's visual estimation. This approach relies heavily on the physician's subjective experience and lacks uniform quantification criteria. For patients, the most immediate impression is that the charging process is arbitrary: faced with irregularly shaped, scattered color spots, the area values quoted by doctors often confuse and even arouse doubt in patients. A charging mode that lacks objective basis and is full of subjective randomness directly leads to patient distrust of the medical institution, turning a straightforward diagnosis-and-treatment process into complicated price haggling and even medical complaints, which greatly degrades the patient's medical experience.
Disclosure of Invention

In order to solve the technical problem that existing picosecond spot-removal treatment may give patients a poor experience, the invention provides the following aspects.

In a first aspect, an AI automatic charging method for removing facial spots in picoseconds comprises: obtaining a facial image of a patient; preprocessing the facial image to obtain a target image; inputting the target image into a trained semantic segmentation model to obtain a spot area and a background area, and recording the pixels in the spot area as target pixels, wherein the semantic segmentation model generates a binary semantic segmentation mask from the target image, and when the pixel in the ith row and jth column of the mask is 1, the pixel in the ith row and jth column of the target image is a target pixel, i and j being positive integers; calculating a total spot area estimation coefficient for the patient, which comprises obtaining the center point coordinates of the facial image and the coordinates of each target pixel, calculating a gradient compensation coefficient for each target pixel from the center point coordinates and that pixel's coordinates, and calculating the total spot area estimation coefficient from the gradient compensation coefficients of all target pixels; inputting the total spot area estimation coefficient into a preset mapping function to obtain the treatment cost; and encrypting the facial image, the total spot area estimation coefficient, and the treatment cost, then outputting the encrypted treatment cost.
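The first-aspect pipeline (binary mask → target pixels → compensated area coefficient) can be sketched in a few lines. Note the hedging: the patent's cosine formula is not reproduced in the source text, so the ellipsoidal-cap surface model below is an assumption chosen only to match the stated ingredients (pixel offset from the center point, rectangle length W and width H, reciprocal of the cosine); the coefficient is likewise assumed to be the λ-scaled sum of the per-pixel compensation coefficients K_n.

```python
import math

def gradient_compensation(x, y, cx, cy, W, H):
    # Hypothetical surface model: the face is treated as an ellipsoidal cap,
    # so the cosine of the local skin surface angle shrinks toward the face
    # border. The patent's actual formula is not given in the source text.
    dx = (x - cx) / (W / 2)
    dy = (y - cy) / (H / 2)
    cos_theta = math.sqrt(max(1e-6, 1.0 - min(1.0, dx * dx + dy * dy)))
    return 1.0 / cos_theta  # reciprocal of the cosine, per claim 3

def total_area_coefficient(mask, cx, cy, W, H, lam=1.0):
    # Sum the compensation coefficient K_n over every target pixel
    # (mask value 1), scaled by the pixel density constant lambda.
    total = 0.0
    for i, row in enumerate(mask):
        for j, v in enumerate(row):
            if v == 1:
                total += gradient_compensation(j, i, cx, cy, W, H)
    return lam * total

# Toy 3x3 mask with five target pixels in a cross shape.
mask = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
coeff = total_area_coefficient(mask, cx=1, cy=1, W=3, H=3)
print(coeff >= 5.0)  # True: compensation never shrinks a pixel's contribution
```

The design intent this models is that a spot pixel near the curved edge of the face covers more real skin than its flat projected area suggests, so its contribution is inflated by 1/cos(θ) before summation.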
Preferably, preprocessing the facial image comprises: performing illumination component estimation and reflection component recovery on the facial image with a multi-scale Retinex algorithm to obtain a first intermediate image; smoothing the first intermediate image with a bilateral filtering algorithm to obtain a second intermediate image; and converting the second intermediate image into the YCbCr color space to obtain the target image.

Preferably, calculating the gradient compensation coefficient of a target pixel comprises: obtaining the patient's facial region in the facial image and the length and width of its circumscribed rectangle, the center coordinates of the circumscribed rectangle serving as the center point coordinates; calculating the cosine of the facial skin surface angle corresponding to the target pixel from the target pixel, the center point coordinates, and the length and width of the circumscribed rectangle; and determining the reciprocal of that cosine as the gradient compensation coefficient of the target pixel.

Preferably, the formula for calculating the cosine of the facial skin surface angle corresponding to the target pixel is as follows: [formula omitted in source]; wherein the first quantity is the cosine of the angle of the curved surfac