CN-117636444-B - Health condition digital integration method and system applied to intelligent mirror

CN117636444B

Abstract

The invention relates to the technical field of data integration, and in particular to a health-condition digital integration method and system applied to an intelligent mirror. The method comprises: acquiring a user facial image and a whole-body motion image from a camera embedded in the intelligent mirror; performing detail enhancement on the facial image to generate a detail-enhanced facial image; performing facial structure analysis on the detail-enhanced facial image to generate facial structure data; performing micro-expression recognition on the detail-enhanced facial image based on the facial structure data to generate user emotion data; performing user visual feature analysis on the detail-enhanced facial image to generate user visual feature data; performing pupil morphology change analysis on the detail-enhanced facial image according to the user visual feature data to generate pupil morphology feature data; and performing light-difference sensitivity analysis on the pupil morphology feature data to generate user eye physiological signal data. The invention achieves efficient and accurate digital integration of health conditions.

Inventors

  • Xiao Mingxi
  • Hu Yuanji
  • Yu Zhaoqing
  • Yang Chenhui
  • Peng Tangle

Assignees

  • Gannan Normal University (赣南师范大学)

Dates

Publication Date
2026-05-05
Application Date
2023-12-26

Claims (8)

  1. A health-condition digital integration method applied to an intelligent mirror, characterized by comprising the following steps: Step S1: acquiring a user facial image and a whole-body motion image from a camera embedded in the intelligent mirror, performing detail enhancement on the user facial image to generate a detail-enhanced facial image, performing facial structure analysis on the detail-enhanced facial image to generate facial structure data, and performing micro-expression recognition on the detail-enhanced facial image based on the facial structure data to generate user emotion data; Step S2, which specifically comprises: Step S21: performing user visual feature analysis on the detail-enhanced facial image to generate user visual feature data; Step S22: performing eye-trajectory optical-flow tracking on the detail-enhanced facial image according to the user visual feature data to generate user eye-trajectory data; Step S23: performing gaze-focus recognition on the user eye-trajectory data to generate an eye gaze focus; Step S24: performing pupil morphology change analysis on the detail-enhanced facial image based on the eye gaze focus to generate pupil morphology feature data; Step S25: performing light-difference sensitivity analysis on the pupil morphology feature data to generate user eye physiological signal data; wherein the specific steps of Step S24 are as follows: Step S241: performing pupil scaling analysis on the detail-enhanced facial image based on the eye gaze focus to generate pupil scaling data; Step S242: performing curve fitting on the pupil scaling data to generate a pupil scaling curve; Step S243: performing edge-contour evolution analysis on the detail-enhanced facial image based on the pupil scaling curve to generate pupil edge-contour change data; Step S244: performing non-roundness structure calculation on the pupil edge-contour change data to generate a pupil contour non-roundness parameter; Step S245: performing pupil dynamic change characteristic analysis on the pupil scaling curve using the pupil contour non-roundness parameter to generate a pupil dynamic scaling rule; Step S246: performing pupil morphology change analysis on the detail-enhanced facial image according to the pupil dynamic scaling rule to generate the pupil morphology feature data; Step S3: performing facial fine-skin analysis on the detail-enhanced facial image based on the user emotion data to generate facial fine-skin feature data; Step S4: performing dynamic feature recognition on the whole-body motion image to generate user dynamic feature data, performing muscle-group morphological structure analysis on the whole-body motion image according to the user dynamic feature data to generate muscle-group morphological structure data, and performing muscle health condition analysis on the muscle-group morphological structure data to generate muscle health condition data; Step S5: performing user three-dimensional skeleton reconstruction on the whole-body motion image to generate a user three-dimensional skeleton model, performing skeleton structure feature analysis on the user three-dimensional skeleton model to generate skeleton structure feature data, and performing abnormal-part analysis on the skeleton structure feature data to generate skeleton abnormal structure data; and Step S6: performing instant fusion analysis on the muscle health condition data and the skeleton abnormal structure data using a deep learning algorithm to generate user dynamic musculoskeletal data, performing potential risk trend analysis on the user dynamic musculoskeletal data to generate musculoskeletal risk trend data, performing holographic visual modeling on the musculoskeletal risk trend data and the digital facial model using a circular convolution algorithm, and constructing a dynamic holographic visual model to execute the health-condition digital integration operation.
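Steps S242 and S244 in claim 1 (curve fitting of pupil scaling data and the non-roundness calculation) can be sketched numerically. This is a minimal illustration, not the patented algorithm: polynomial fitting stands in for the unspecified curve-fitting method, the isoperimetric ratio stands in for the unspecified non-roundness parameter, and all function names are hypothetical.

```python
import numpy as np

def fit_pupil_scaling_curve(t, radii, degree=3):
    """Fit a polynomial to pupil-radius samples over time (cf. step S242)."""
    coeffs = np.polyfit(t, radii, degree)
    return np.poly1d(coeffs)

def contour_noncircularity(points):
    """Non-roundness parameter for a closed pupil contour (cf. step S244).

    Uses the isoperimetric ratio perimeter^2 / (4*pi*area): a perfect
    circle scores 1.0, and larger values indicate a less round contour.
    """
    closed = np.vstack([points, points[:1]])         # close the polygon
    seg = np.diff(closed, axis=0)
    perimeter = np.sum(np.hypot(seg[:, 0], seg[:, 1]))
    x, y = closed[:-1, 0], closed[:-1, 1]
    xn, yn = closed[1:, 0], closed[1:, 1]
    area = 0.5 * abs(np.sum(x * yn - xn * y))        # shoelace formula
    return perimeter ** 2 / (4 * np.pi * area)

# Synthetic demo contours: a circle should score ~1.0, an ellipse higher.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = np.column_stack([2 * np.cos(theta), np.sin(theta)])
```

A pupil whose contour score drifts upward over a scaling cycle would, under this proxy, indicate increasingly non-round dilation.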
  2. The health-condition digital integration method applied to an intelligent mirror according to claim 1, wherein the specific steps of Step S1 are: Step S11: acquiring a user facial image and a whole-body motion image from the camera embedded in the intelligent mirror; Step S12: performing detail enhancement on the user facial image to generate a detail-enhanced facial image; Step S13: performing facial feature node recognition on the detail-enhanced facial image to generate facial feature node position data; Step S14: performing facial structure analysis on the detail-enhanced facial image according to the facial feature node position data to generate facial structure data; Step S15: performing micro-expression recognition on the detail-enhanced facial image based on the facial structure data to generate user micro-expression data; and Step S16: performing emotion fluctuation analysis on the user micro-expression data to generate user emotion data.
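The node-to-emotion chain of claim 2 (feature node positions, micro-expression features, emotion fluctuation) admits a simple geometric sketch. The node layout and both function names below are hypothetical, and the fluctuation statistic is only an illustrative proxy for the undisclosed analysis.

```python
import numpy as np

def expression_features(landmarks):
    """Geometric features from facial feature node positions (cf. S13-S15).

    `landmarks` is an (N, 2) array of node coordinates; indices 0/1 are
    taken as the mouth corners and 2/3 as the upper/lower lip midpoints
    (an assumed layout, not the patent's).
    """
    mouth_width = np.linalg.norm(landmarks[0] - landmarks[1])
    mouth_open = np.linalg.norm(landmarks[2] - landmarks[3])
    return np.array([mouth_width, mouth_open])

def emotion_fluctuation(feature_series):
    """Emotion-fluctuation proxy (cf. S16): the standard deviation of
    frame-to-frame changes in the expression feature vectors."""
    deltas = np.diff(np.asarray(feature_series, dtype=float), axis=0)
    return float(np.std(deltas))
```

A still face yields zero fluctuation, while oscillating mouth features yield a positive score.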
  3. The health-condition digital integration method applied to an intelligent mirror according to claim 1, wherein the specific steps of Step S25 are: Step S251: applying multi-frequency illumination to the user to obtain user pupil illumination response data; Step S252: performing multi-frequency illumination response-rate analysis on the user pupil illumination response data to generate a multi-frequency light-intensity response characteristic curve; Step S253: performing pupil zoom-saturation time detection on the pupil morphology feature data according to the multi-frequency light-intensity response characteristic curve to generate zoom-saturation time data; Step S254: performing multi-frequency light inertial-strain analysis on the pupil morphology feature data to generate pupil multi-frequency morphology change data; Step S255: performing pupil light-loss recovery characteristic analysis on the pupil multi-frequency morphology change data using the zoom-saturation time data to generate pupil restorability data; Step S256: performing light-difference sensitivity analysis on the pupil restorability data to generate pupil light-sensitivity data; and Step S257: performing eye physiological health analysis on the pupil morphology feature data based on the pupil light-sensitivity data to generate user eye physiological signal data.
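Two measurable quantities from claim 3 can be sketched on synthetic data: a zoom-saturation time (S253) from a light-step response, and a light-difference sensitivity (S256) as a response slope. The exponential response model and both function names are assumptions for illustration only.

```python
import numpy as np

def saturation_time(t, radius, frac=0.95):
    """Zoom-saturation time (cf. S253): the time at which the pupil has
    completed `frac` of its total constriction after a light step.
    `radius` starts at the dark-adapted value and decays toward a
    bright-adapted asymptote."""
    total = radius[0] - radius[-1]
    target = radius[0] - frac * total
    idx = int(np.argmax(radius <= target))
    return float(t[idx])

def light_sensitivity(intensities, constrictions):
    """Light-difference sensitivity proxy (cf. S256): slope of pupil
    constriction versus light intensity from a linear fit."""
    slope, _ = np.polyfit(intensities, constrictions, 1)
    return float(slope)

# Synthetic exponential constriction: radius decays from 4 mm toward 2 mm
# with a 1 s time constant, so 95% saturation occurs near t = ln(20) ~ 3 s.
t = np.linspace(0.0, 5.0, 501)
radius = 2.0 + 2.0 * np.exp(-t)
```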
  4. The health-condition digital integration method applied to an intelligent mirror according to claim 1, wherein the facial fine-skin analysis includes skin texture analysis and facial pigment distribution analysis, and the specific steps of Step S3 are as follows: Step S31: performing skin texture recognition on the detail-enhanced facial image based on the user emotion data to generate skin texture data; Step S32: performing skin texture analysis on the skin texture data to generate facial skin texture data; Step S33: performing facial pigment distribution analysis on the detail-enhanced facial image to generate pigment uniformity status data; Step S34: performing skin characteristic analysis on the facial skin texture data and the pigment uniformity status data to generate facial fine-skin feature data; and Step S35: performing user facial portrait construction on the user eye physiological signal data and the facial fine-skin feature data to construct a digital facial model.
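The two analyses named in claim 4 can be sketched with elementary image statistics: gradient magnitude as a texture measure and the coefficient of variation of a colour channel as a pigment-uniformity measure. Both metrics are stand-ins chosen for illustration; the patent does not disclose its actual measures.

```python
import numpy as np

def texture_score(gray):
    """Skin-texture proxy (cf. S31-S32): mean gradient magnitude over a
    grayscale patch; smoother skin gives a lower score."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def pigment_uniformity(channel):
    """Pigment-uniformity proxy (cf. S33): inverse of the coefficient of
    variation of a colour channel; 1.0 means perfectly even pigmentation."""
    cv = np.std(channel) / (np.mean(channel) + 1e-9)
    return float(1.0 / (1.0 + cv))
```

On a uniform patch the texture score is 0 and uniformity approaches 1; a high-contrast patch scores worse on both.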
  5. The health-condition digital integration method applied to an intelligent mirror according to claim 1, wherein the specific steps of Step S4 are: Step S41: performing dynamic feature recognition on the whole-body motion image to generate user dynamic feature data; Step S42: performing fine-grained muscle tissue image segmentation on the whole-body motion image according to the user dynamic feature data to generate a muscle tissue image; Step S43: performing muscle mass quantification on the muscle tissue image to generate muscle mass data; Step S44: performing muscle-group distribution analysis on the muscle mass data to generate muscle-group distribution data; Step S45: performing muscle-group morphological structure analysis on the muscle tissue image using the muscle-group distribution data to generate muscle-group morphological structure data; and Step S46: performing muscle health condition analysis on the muscle-group morphological structure data to generate muscle health condition data.
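The segment-then-quantify chain of claim 5 (S42-S44) can be sketched with simple thresholding on a synthetic intensity map. Threshold segmentation, pixel-area mass, and band-wise distribution are all illustrative simplifications of the undisclosed fine-grained analysis; the function names are invented.

```python
import numpy as np

def segment_muscle(intensity, threshold):
    """Toy fine-grained segmentation (cf. S42): boolean mask of pixels
    above an intensity threshold in a synthetic tissue map."""
    return intensity > threshold

def muscle_mass(mask, pixel_area=1.0):
    """Mass proxy (cf. S43): segmented pixel count times per-pixel area."""
    return float(mask.sum() * pixel_area)

def group_distribution(mask, n_bands=2):
    """Distribution proxy (cf. S44): fraction of segmented area falling
    in each vertical band (e.g. left vs right half of the image).
    Assumes the mask contains at least one segmented pixel."""
    bands = np.array_split(mask, n_bands, axis=1)
    areas = np.array([b.sum() for b in bands], dtype=float)
    return areas / areas.sum()
```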
  6. The health-condition digital integration method applied to an intelligent mirror according to claim 1, wherein the specific steps of Step S5 are: Step S51: performing joint point-cloud identification on the whole-body motion image using computer vision technology to generate joint point-cloud data; Step S52: performing user three-dimensional skeleton reconstruction on the joint point-cloud data to generate a user three-dimensional skeleton model; Step S53: performing skeleton structure symmetry evaluation on the user three-dimensional skeleton model to generate skeleton symmetry data; Step S54: performing skeleton structure feature analysis on the skeleton symmetry data to generate skeleton structure feature data; and Step S55: performing abnormal-part analysis on the skeleton structure feature data to generate skeleton abnormal structure data.
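The symmetry evaluation of step S53 has a natural geometric sketch: mirror the right-side joints across a sagittal midline and measure how far they land from their left-side counterparts. The midline convention and function name are assumptions; the patent does not specify its symmetry metric.

```python
import numpy as np

def symmetry_score(left_joints, right_joints, midline_x=0.0):
    """Skeleton symmetry proxy (cf. S53): mirror the right-side joints
    across the plane x = midline_x and return the mean distance to the
    corresponding left-side joints. 0.0 means perfectly symmetric."""
    mirrored = np.array(right_joints, dtype=float)
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]
    return float(np.mean(np.linalg.norm(np.asarray(left_joints) - mirrored, axis=1)))
```

A uniform lateral shift of one side shows up directly as a nonzero score, which step S55 could then flag as an abnormal part.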
  7. The health-condition digital integration method applied to an intelligent mirror according to claim 1, wherein the specific steps of Step S6 are: Step S61: performing instant fusion analysis on the muscle health condition data and the skeleton abnormal structure data using a deep learning algorithm to generate user dynamic musculoskeletal data; Step S62: performing implicit association analysis on the user dynamic musculoskeletal data to generate dynamic musculoskeletal association data; Step S63: performing potential risk analysis on the user dynamic musculoskeletal data according to the dynamic musculoskeletal association data to obtain musculoskeletal potential risk data; Step S64: performing risk trend prediction on the musculoskeletal potential risk data to generate musculoskeletal risk trend data; and Step S65: performing holographic visual modeling on the musculoskeletal risk trend data and the digital facial model using a circular convolution algorithm, and constructing a dynamic holographic visual model to execute the health-condition digital integration operation.
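The fusion and trend steps of claim 7 can be outlined in miniature. Plain feature concatenation stands in for the patent's deep-learning fusion (S61), and a linear fit over time stands in for the risk-trend prediction (S64); both are deliberate simplifications with invented names.

```python
import numpy as np

def fuse(muscle_feats, bone_feats):
    """Stand-in for the deep-learning fusion of S61: concatenate the
    muscle and skeleton feature vectors into one musculoskeletal vector."""
    return np.concatenate([muscle_feats, bone_feats])

def risk_trend(times, risk_scores):
    """Risk-trend proxy (cf. S64): slope of a linear fit of risk score
    against time; a positive slope suggests a worsening trend."""
    slope, _ = np.polyfit(times, risk_scores, 1)
    return float(slope)
```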
  8. A health-condition digital integration system applied to an intelligent mirror, for performing the health-condition digital integration method applied to an intelligent mirror according to claim 1, comprising: a facial structure analysis module, configured to acquire a user facial image and a whole-body motion image from the camera embedded in the intelligent mirror, perform detail enhancement on the user facial image to generate a detail-enhanced facial image, perform facial structure analysis on the detail-enhanced facial image to generate facial structure data, and perform micro-expression recognition on the detail-enhanced facial image based on the facial structure data to generate user emotion data; a pupil morphology feature module, configured to perform user visual feature analysis on the detail-enhanced facial image to generate user visual feature data, perform pupil morphology change analysis on the detail-enhanced facial image according to the user visual feature data to generate pupil morphology feature data, and perform light-difference sensitivity analysis on the pupil morphology feature data to generate user eye physiological signal data; a skin feature module, configured to perform facial fine-skin analysis on the detail-enhanced facial image based on the user emotion data to generate facial fine-skin feature data; a muscle-group morphological structure module, configured to perform dynamic feature recognition on the whole-body motion image to generate user dynamic feature data, perform muscle-group morphological structure analysis on the whole-body motion image according to the user dynamic feature data to generate muscle-group morphological structure data, and perform muscle health condition analysis on the muscle-group morphological structure data to generate muscle health condition data; a three-dimensional skeleton model module, configured to perform user three-dimensional skeleton reconstruction on the whole-body motion image to generate a user three-dimensional skeleton model; and a holographic visual model module, configured to perform instant fusion analysis on the muscle health condition data and the skeleton abnormal structure data using a deep learning algorithm to generate user dynamic musculoskeletal data, perform potential risk trend analysis on the user dynamic musculoskeletal data to generate musculoskeletal risk trend data, perform holographic visual modeling on the musculoskeletal risk trend data and the digital facial model using a circular convolution algorithm, and construct a dynamic holographic visual model to execute the health-condition digital integration operation.

Description

Health condition digital integration method and system applied to intelligent mirror

Technical Field

The invention relates to the technical field of data integration, and in particular to a health-condition digital integration method and system applied to an intelligent mirror.

Background

The intelligent mirror is an emerging smart health device that combines a mirror with intelligent technology. As people's demands for health monitoring and management grow, it aims to provide a digital integration method for personal health conditions: a convenient and practical tool that monitors the user's health in real time and, by integrating health data, offers personalized health advice.

Disclosure of Invention

To solve at least one of the above technical problems, the invention provides a health-condition digital integration method and system applied to an intelligent mirror, the method comprising the following steps: Step S1: acquiring a user facial image and a whole-body motion image from a camera embedded in the intelligent mirror, performing detail enhancement on the user facial image to generate a detail-enhanced facial image, performing facial structure analysis on the detail-enhanced facial image to generate facial structure data, and performing micro-expression recognition on the detail-enhanced facial image based on the facial structure data to generate user emotion data; Step S2: performing user visual feature analysis on the detail-enhanced facial image to generate user visual feature data, performing pupil morphology change analysis on the detail-enhanced facial image according to the user visual feature data to generate pupil morphology feature data, and performing light-difference sensitivity analysis on the pupil morphology feature data to generate user eye physiological signal data; Step S3: performing facial fine-skin analysis on the detail-enhanced facial image based on the user emotion data to generate facial fine-skin feature data; Step S4: performing dynamic feature recognition on the whole-body motion image to generate user dynamic feature data, performing muscle-group morphological structure analysis on the whole-body motion image according to the user dynamic feature data to generate muscle-group morphological structure data, and performing muscle health condition analysis on the muscle-group morphological structure data to generate muscle health condition data; Step S5: performing user three-dimensional skeleton reconstruction on the whole-body motion image to generate a user three-dimensional skeleton model, performing skeleton structure feature analysis on the user three-dimensional skeleton model to generate skeleton structure feature data, and performing abnormal-part analysis on the skeleton structure feature data to generate skeleton abnormal structure data; and Step S6: performing instant fusion analysis on the muscle health condition data and the skeleton abnormal structure data using a deep learning algorithm to generate user dynamic musculoskeletal data, performing potential risk trend analysis on the user dynamic musculoskeletal data to generate musculoskeletal risk trend data, performing holographic visual modeling on the musculoskeletal risk trend data and the digital facial model using a circular convolution algorithm, and constructing a dynamic holographic visual model to execute the health-condition digital integration operation.
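The S1-S6 flow above can be outlined as plain function composition. This is a structural sketch only: every stage below is a hypothetical placeholder (the disclosure does not specify concrete algorithms at this level), and all names are invented for the sketch.

```python
def enhance_detail(img):              # S1: detail enhancement (placeholder)
    return img

def emotion_from_face(img):           # S1: structure + micro-expression (placeholder)
    return {"emotion": "neutral"}

def eye_signals(img):                 # S2: pupil / light-sensitivity analysis (placeholder)
    return {"pupil_ok": True}

def skin_features(img, emotion):      # S3: facial fine-skin analysis (placeholder)
    return {"texture": 0.0}

def musculoskeletal(motion):          # S4-S5: muscle + skeleton analysis (placeholder)
    return {"muscle": [], "bone": []}

def fuse_and_model(parts):            # S6: fusion + holographic modeling (placeholder)
    return {"model": parts}

def integrate_health_state(face_image, motion_image):
    """Order the S1-S6 stages: face-side analyses feed the report first,
    body-side analyses second, and S6 fuses everything into one model."""
    face = enhance_detail(face_image)
    emotion = emotion_from_face(face)
    report = {
        "emotion": emotion,
        "eye": eye_signals(face),
        "skin": skin_features(face, emotion),
        "musculoskeletal": musculoskeletal(motion_image),
    }
    return fuse_and_model(report)
```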
By acquiring the user's facial image and whole-body motion image, the invention provides comprehensive observation of the user's appearance and dynamic posture. The detail-enhanced facial image improves the clarity and visibility of the facial image; facial structure analysis extracts facial feature information such as facial contours and eye positions; micro-expression recognition captures the user's emotional state through analysis of subtle facial movements; the user visual feature data provides detailed information about the appearance of the user's face, such as skin condition and facial features; pupil morphology change analysis reveals the user's attention level and emotional state; the pupil morphology feature data and eye physiological signal data provide quantitative information about the user's visual perception and attention level; and facial fine-skin analysis provides information about the user's skin health and skin texture. The user facial portrait is constructed from the emotion data, the eye physiological signal data, and the facial fine-skin feature data, and comprehensively describes the user's facial characteristics and emotional states; the digital facial model serves as a visual representation of the user's facial characteristics and provides a basis for subsequent health-condition analysis; the gesture and movement mode of the u