CN-121998723-A - Charging method and system for self-adaptive user behavior characteristics

CN121998723A

Abstract

The invention relates to the technical field of image processing, and in particular to a charging method and system adaptive to user behavior characteristics. The method comprises: calculating the pixel gray gradient field of an original image and identifying the face region; binarizing the face region to obtain a binarized pixel matrix; clustering spatially adjacent pixels with identical pixel values into connected domains; constructing a plurality of topological feature data subsets and calculating and extracting the center-point pixel coordinates of each subset; determining the skin compression rate; calculating a correction coefficient for each pixel in the skin-lesion pixel region from the three-dimensional rotation deflection angle and the skin compression rate; determining the number of equivalent pixels under the occlusion mask; and calculating the charging value. The invention improves the accuracy of judging the actual size of the skin-lesion region and thereby improves charging accuracy.

Inventors

  • LIU LIQIANG

Assignees

  • 广州市中崎商业机器股份有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-02-07

Claims (10)

  1. A charging method for adaptive user behavior features, comprising: obtaining a facial image of a patient, calculating the pixel gray gradient field of the original image, and identifying the face region according to the gradient vectors of the pixel gray gradient field; performing binarization on the face region to obtain a binarized pixel matrix; traversing the neighborhood connectivity of each pixel in the binarized pixel matrix and clustering spatially adjacent pixels with the same pixel value into connected domains; constructing a plurality of topological feature data subsets and calculating and extracting the center-point pixel coordinates of each subset, wherein the k-th topological feature data subset comprises all coordinates in the k-th connected domain, and one of the subsets is the eye pixel coordinate set; calculating the three-dimensional rotation deflection angle of the head according to the distances between the center-point pixel coordinates; determining the skin compression rate according to aspect-ratio data of the pixels in the eye pixel coordinate set; calculating a correction coefficient for each pixel in the skin-lesion pixel region according to the three-dimensional rotation deflection angle and the skin compression rate; and calculating the charging value according to the correction coefficients of the pixels in the skin-lesion pixel region and the number of equivalent pixels.
  2. The method of claim 1, wherein calculating the pixel gray gradient field of the original image and identifying the face region based on the gradient vectors of the pixel gray gradient field comprises: calculating the partial derivatives of each pixel in the original image in the horizontal and vertical directions to construct a two-dimensional gradient vector field; and identifying the set of vectors in the gradient vector field that converge toward a center, and determining the closed region to which that vector set points as the face region.
  3. The charging method for adaptive user behavior according to claim 1, wherein clustering spatially adjacent pixels with the same pixel value into connected domains comprises: calculating the optimal global threshold with the Otsu algorithm and binarizing the image to determine the geometric boundary of each topological feature subdomain; and traversal-scanning according to the 8-neighborhood connectivity of the pixel space, clustering adjacent pixels with the same pixel value into connected domains.
  4. The method of claim 1, wherein extracting the center-point pixel coordinates of the k-th topological feature data subset comprises: calculating the center-point pixel abscissa of the k-th topological feature data subset as X̄_k = (1/m_k) · Σ_{j=1}^{m_k} X_{k,j}, where X_{k,j} is the abscissa of the j-th pixel coordinate in the k-th connected domain and m_k is the number of pixels in the k-th connected domain; and calculating the center-point pixel ordinate as Ȳ_k = (1/m_k) · Σ_{j=1}^{m_k} Y_{k,j}, where Y_{k,j} is the ordinate of the j-th pixel coordinate in the k-th connected domain, thereby obtaining the center-point pixel coordinates (X̄_k, Ȳ_k) of the k-th topological feature data subset.
  5. The method of claim 4, wherein the plurality of topological feature data subsets comprise a left-eye pixel coordinate set, a right-eye pixel coordinate set, a nose pixel coordinate set, and a face-contour pixel coordinate set, and wherein the center-point pixel of the left-eye pixel coordinate set is labeled the left-eye center coordinate, the center-point pixel of the right-eye pixel coordinate set is labeled the right-eye center coordinate, the center-point pixel of the nose pixel coordinate set is labeled the nose center coordinate, and the center-point pixel of the face-contour pixel coordinate set is labeled the face center coordinate.
  6. The charging method for adaptive user behavior according to claim 5, wherein calculating the three-dimensional rotation deflection angle of the head comprises: calculating a symmetry feature value R_h, where x_R is the abscissa of the right-eye center coordinate, x_L is the abscissa of the left-eye center coordinate, and x_N is the abscissa of the nose center coordinate; calculating a longitudinal proportion feature value R_v = d_{LR-N} / d_{L-R}, where d_{LR-N} is the perpendicular distance from the nose center coordinate to the line connecting the left-eye and right-eye center coordinates, and d_{L-R} is the distance between the left-eye and right-eye center coordinates; calculating a horizontal rotation deflection angle θ_y, with reference to the field angle subtended by the binocular distance as captured by the camera in the frontal-face reference state; and calculating a vertical pitch deflection angle θ_p, where R_{v0} is a preset reference value of the longitudinal proportion feature value, thereby obtaining the three-dimensional rotation deflection angle.
  7. The method of claim 1, wherein determining the skin compression rate based on aspect-ratio data of the pixels in the eye pixel coordinate set comprises: obtaining the pixel distribution proportions of the eye pixel set along the vertical and horizontal axes and calculating the aspect ratio; comparing the aspect ratio with a preset aspect-ratio threshold, and judging the skin to be in a folded and squeezed state if the aspect ratio is below the threshold; and calculating the skin compression rate with a continuous function based on the linear difference between the aspect ratio and the threshold.
  8. The charging method for adaptive user behavior features according to claim 1, wherein geometrically fitting the skin-lesion edge truncated by the occlusion mask data to determine the number of equivalent pixels under the mask comprises: acquiring the intersection boundary between the occlusion mask data and the skin-lesion pixel region, and extracting the curvature change rates of the skin-lesion edges on both sides of the boundary; performing geometric path prediction inside the occluded mask area with a spline-curve algorithm based on the curvature change rates, so as to construct a closed, logically connected simulated skin-lesion edge; and counting the number of equivalent pixels enclosed by the simulated skin-lesion edge within the area covered by the occlusion mask data.
  9. The charging method for adaptive user behavior features according to claim 1, wherein calculating the charging value according to the correction coefficients of the pixels in the skin-lesion pixel region and the number of equivalent pixels comprises: obtaining the angle between the surface normal vector of the i-th skin-lesion pixel and the optical axis of the lens from the three-dimensional rotation deflection angle; obtaining the linear compensation coefficient of the i-th skin-lesion pixel from the skin compression rate; calculating the adaptive correction coefficient K_i of the i-th skin-lesion pixel; and calculating the charging value P using a price unit of preset size.
  10. A charging system for adaptive user behavior features, comprising a processor and a memory, characterized in that the memory stores a computer program which, when executed by the processor, implements the charging method for adaptive user behavior features according to any one of claims 1-9.
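The binarization and clustering steps of claim 3 can be illustrated as follows. This is a minimal NumPy-only sketch, not the patented implementation: `otsu_threshold` and `label_8_connected` are illustrative names, and a simple iterative flood fill stands in for whatever traversal scan the patent uses.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the global threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                       # class-0 probability up to t
    cum_mean = np.cumsum(prob * np.arange(256))   # cumulative gray-level mean
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_p[t - 1]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (mean_total - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def label_8_connected(binary):
    """Cluster spatially adjacent foreground pixels into connected domains
    using 8-neighborhood connectivity (iterative flood fill)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                stack.append((ny, nx))
    return labels, next_label
```

Each resulting label corresponds to one connected domain, i.e. one topological feature data subset in the patent's terminology.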
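Claims 4 and 7 can likewise be sketched in NumPy. The centroid formula follows claim 4 directly; the compression function is hypothetical, since claim 7 only specifies "a continuous function based on the linear difference between the aspect ratio and the threshold" without giving its exact form, and the threshold value 0.5 is an arbitrary placeholder.

```python
import numpy as np

def domain_centroid(coords):
    """Center-point pixel of the k-th topological feature data subset:
    the mean of all pixel coordinates in the connected domain (claim 4)."""
    coords = np.asarray(coords, dtype=float)  # shape (m_k, 2), rows are (x, y)
    return coords.mean(axis=0)

def eye_aspect_ratio(coords):
    """Ratio of the eye pixel set's extent along the vertical axis to its
    extent along the horizontal axis (claim 7)."""
    coords = np.asarray(coords, dtype=float)
    width = np.ptp(coords[:, 0]) + 1.0   # +1: extent in whole pixels
    height = np.ptp(coords[:, 1]) + 1.0
    return height / width

def skin_compression_rate(aspect_ratio, threshold=0.5):
    """Hypothetical continuous mapping: below the preset threshold the skin
    is judged folded/squeezed, with compression growing linearly with the
    shortfall. The claim does not give the actual function."""
    return max(0.0, (threshold - aspect_ratio) / threshold)
```

A squinted eye flattens vertically, so its aspect ratio drops below the threshold and the compression rate rises above zero.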
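Claim 9's charging computation might look like the sketch below. The extracted text does not reproduce the actual formulas for K_i or P, so the combination shown (a 1/cos projection correction times a linear compensation coefficient, summed and scaled by the price unit) is an assumed form for illustration only.

```python
import numpy as np

def charging_value(normal_angles_rad, compensation, unit_price):
    """Hypothetical sketch of claim 9. theta_i is the angle between the
    i-th skin-lesion pixel's surface normal and the lens optical axis;
    compensation[i] is the linear compensation coefficient C_i derived
    from the skin compression rate. Assumed form (not from the source):
        K_i = C_i / cos(theta_i),   P = unit_price * sum(K_i)."""
    theta = np.asarray(normal_angles_rad, dtype=float)
    comp = np.asarray(compensation, dtype=float)
    k = comp / np.cos(theta)  # per-pixel adaptive correction coefficient
    return float(unit_price * k.sum())
```

With a frontal, undeformed face (all angles zero, all compensation coefficients one) this reduces to the plain pixel count times the price unit, which matches the intuition that corrections only matter under rotation or compression.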

Description

Charging method and system for self-adaptive user behavior characteristics

Technical Field

The present invention relates generally to the field of image processing technology, and more particularly to a charging method and system for self-adaptive user behavior characteristics.

Background

With the development of the medical industry, demand for fine-grained management of procedures such as facial spot, mole, or tattoo treatment is increasing, yet most medical-aesthetic institutions still charge by traditional manual estimation or fixed-area pricing. Although some prior art attempts to use image recognition to divide the treatment area into fixed-size charging units for automatic counting, in practice the complex three-dimensional structure of the face means that the apparent pixel size of the same skin lesion differs significantly between frontal views, lateral views, and different shooting distances. In addition, human tissue deforms flexibly: changes of expression during shooting (such as laughing or squinting) or the influence of gravity cause the skin to wrinkle or stretch, so the pixel density of the skin-lesion region changes nonlinearly. Meanwhile, occluders such as hair, glasses frames, or masks are often present in real shooting scenes, making the counted lesion area smaller than it actually is. These interferences from shooting angle, skin deformation, and external occlusion mean that, in real charging scenarios, the prior art judges the skin-lesion area inaccurately and therefore charges inaccurately.

Disclosure of Invention

To solve the technical problem of low charging accuracy caused by inaccurate judgment of the skin-lesion area in real charging scenarios, the invention provides the following aspects.

In a first aspect, a charging method for self-adaptive user behavior features includes: obtaining a facial image of a patient; calculating the pixel gray gradient field of the original image and identifying the face region according to the gradient vectors of the pixel gray gradient field; performing binarization on the face region to obtain a binarized pixel matrix; traversing the neighborhood connectivity of each pixel in the binarized pixel matrix and clustering spatially adjacent pixels with the same pixel value into connected domains; constructing a plurality of topological feature data subsets and calculating and extracting the center-point pixel coordinates of each subset, wherein the k-th topological feature data subset comprises all coordinates in the k-th connected domain and one of the subsets is the eye pixel coordinate set; calculating the three-dimensional rotation deflection angle of the head according to the distances between the center-point pixel coordinates; determining the skin compression rate according to aspect-ratio data of the pixels in the eye pixel coordinate set; obtaining the skin-lesion pixel region and occlusion mask data within the face region; calculating a correction coefficient for each pixel in the skin-lesion pixel region according to the three-dimensional rotation deflection angle and the skin compression rate; geometrically fitting the skin-lesion edge truncated by the occlusion mask data to determine the number of equivalent pixels under the mask; and calculating the charging value according to the correction coefficients of the pixels in the skin-lesion pixel region and the number of equivalent pixels.
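The first step above, building the pixel gray gradient field from per-pixel partial derivatives, can be sketched with NumPy. This is a minimal illustration: `np.gradient` uses finite differences, which is one of several ways to realize the horizontal and vertical partial derivatives the text describes, and the function names are not from the patent.

```python
import numpy as np

def gray_gradient_field(gray):
    """Two-dimensional gradient vector field: per-pixel partial derivatives
    in the horizontal (x) and vertical (y) directions."""
    gray = np.asarray(gray, dtype=float)
    # np.gradient returns one derivative array per axis, rows (y) first
    gy, gx = np.gradient(gray)
    return gx, gy

def gradient_magnitude(gx, gy):
    """Magnitude of each gradient vector; strong edges have large values."""
    return np.hypot(gx, gy)
```

Vectors in this field point in the direction of increasing gray level, which is what allows the method to look for a vector set converging toward a center when identifying the face region.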
Preferably, calculating the pixel gray gradient field of the original image and identifying the face region according to the gradient vectors of the pixel gray gradient field comprises: calculating the partial derivatives of each pixel in the original image in the horizontal and vertical directions to construct a two-dimensional gradient vector field; identifying the set of vectors in the gradient vector field that converge toward a center; and determining the closed region to which that vector set points as the face region. Preferably, clustering spatially adjacent pixels with the same pixel value into connected domains comprises: extracting pixel edges within the face region with the Canny operator; calculating the optimal global threshold with the Otsu algorithm and binarizing the image to determine the geometric boundary of each topological feature subdomain; and traversal-scanning according to the 8-neighborhood connectivity of the pixel space, clustering adjacent pixels with the same pixel value into connected domains. Preferably, extracting the center-point pixel coordinates of the k-th topological feature data subset comprises calculating the center-point pixel abscissa of the k-th topological feature data subset as X̄_k = (1/m_k) · Σ_{j=1}^{m_k} X_{k,j}, where X_{k,j} is the abscissa of the j-th pixel coordinate in the k-th connected domain and m_k is the number of pixels in the k-th connected domain.