
CN-121999287-A - Intelligent hair identification blowing method based on deep learning


Abstract

The invention discloses an intelligent hair quality identification blowing method based on deep learning, comprising the following steps: S1, collecting data and constructing a data input set; S2, inputting the set into a feature pyramid network and introducing a multi-scale feature bidirectional interaction mechanism to generate a unified hair quality feature representation vector; S3, inputting the vector into a recognition model based on a graph structure attention mechanism and a hierarchical label perception structure to output a hair quality type recognition result; S4, collecting image frames and extracting drying dynamic change features through optical flow estimation and a global motion aggregation optical flow network; S5, inputting an improved graph strategy optimization network to generate optimal control parameters; S6, executing hair-drying control under a feedforward-feedback structure; S7, calculating feature changes before and after drying to generate a care improvement index; and S8, carrying out strategy feedback and self-adaptive update according to the care improvement index. The invention realizes the deep fusion of hair quality identification, hair-drying control and personalized strategy optimization, and is suitable for the field of intelligent personal care equipment.
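The eight-step pipeline in the abstract can be sketched as plain function stubs. Every function name, threshold and formula below is an illustrative assumption, not the patent's implementation:

```python
# Hypothetical sketch of the S1-S8 pipeline from the abstract; every name,
# threshold and formula here is illustrative, not taken from the patent.

def preprocess(image, env):                      # S1: build the data input set
    return {"image": image, "env": env}

def extract_features(inputs):                    # S2: feature pyramid + interaction
    return [float(x) for x in inputs["image"]]

def identify_hair_type(features):                # S3: graph attention + labels
    return "dry" if sum(features) / len(features) < 0.5 else "healthy"

def estimate_drying_dynamics(frames):            # S4: optical-flow change features
    return [abs(b - a) for a, b in zip(frames, frames[1:])]

def optimize_parameters(hair_type, dynamics):    # S5: graph strategy optimization
    base = 0.6 if hair_type == "dry" else 0.8    # hypothetical base setting
    return {"wind": base, "heat": base - 0.2}

def care_improvement_index(before, after):       # S7: key-feature change amplitude
    return sum(a - b for a, b in zip(after, before)) / len(before)
```

S6 (feedforward-feedback control) and S8 (strategy feedback) would sit between and after these stubs; they are sketched separately below the claims.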

Inventors

  • JIANG ZONGYI
  • WANG JIAODAN
  • LIU ZUOQIANG

Assignees

  • 绍兴市益强电器科技有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-01-23

Claims (9)

  1. An intelligent hair quality identification blowing method based on deep learning, characterized by comprising the following steps: S1, acquiring user hair images and environment sensing data, denoising, enhancing and standardizing the image data, and constructing a data input set; S2, inputting the data input set into a feature pyramid network, introducing a multi-scale feature bidirectional interaction mechanism, extracting texture, luster and structural features at different scales, and generating a uniform-dimension hair quality feature representation vector; S3, inputting the hair quality feature representation vector into a hair quality type recognition model based on a graph structure attention mechanism and a hierarchical label perception structure, constructing a graph structure with hair samples as nodes and feature similarity as edges, and outputting a hair quality type recognition result through the label perception mechanism; S4, collecting continuous image frames during blowing, constructing an optical flow estimation model between image frame pairs, and extracting drying dynamic change features of the hair region through a global motion aggregation optical flow network; S5, combining the hair quality type recognition result and the drying dynamic change features, constructing a hair region graph structure, inputting it into an improved graph strategy optimization network, executing multiple rounds of control parameter generation and strategy updating, and finally outputting optimal control parameters; S6, executing hair-drying control according to the optimal control parameters, adjusting wind speed and heating power in real time with a feedforward-feedback control structure, and performing dynamic correction with sensor data; S7, acquiring images and sensing data after hair drying is completed, and calculating a care improvement index based on the change amplitude of key features before and after drying; and S8, generating feedback information according to the care improvement index; if the drying effect is poor or a potential risk exists, recording the current hair quality state, the control parameters and the scoring result as experience data, and inputting the experience data into the improved graph strategy optimization network for the next round of parameter adjustment and personalized strategy update.
  2. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S1 specifically comprises: S11, acquiring color image data of the head area of a user, and synchronously acquiring environment temperature data, relative humidity data and wind speed data to obtain original environment sensing data; S12, performing noise suppression processing on the image data, replacing random noise pixels in the image by a filtering algorithm based on pixel neighborhood operations, and retaining the edge contours and texture lines of the hair region in the image; S13, performing gray distribution adjustment on the noise-suppressed image; S14, standardizing the image size, scaling the image to a specified width and height, and unifying the color channel order to obtain original image data; S15, combining the original image data and the original environment sensing data into a data structure to form a data input set that uniformly encodes the image and environment data.
  3. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S2 specifically comprises: S21, receiving the data input set, inputting the image part to the basic convolution extraction layer of the feature pyramid network, and extracting an initial feature map; S22, generating feature maps at multiple scales through successively stacked downsampling convolution layers, corresponding in turn to different resolution levels of the original image, and respectively retaining texture edge information and hair region structure information; S23, constructing a top-down up-sampling path, up-sampling the high-level semantic feature maps, and laterally fusing them with bottom-level feature maps of the same scale to generate a multi-scale fusion feature map group containing rich high- and low-level information; S24, introducing a bidirectional interaction mechanism into the multi-scale fusion feature map group in each scale direction, which specifically comprises performing pixel-by-pixel weighted superposition in the spatial dimension, channel attention adjustment in the channel dimension, and cross-layer feature alignment in the scale dimension to form inter-scale information sharing and enhancement; S25, performing unified size adjustment and channel compression on the interacted multi-scale feature maps, and converting them into a uniform-dimension hair quality feature representation vector.
  4. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S3 specifically comprises: S31, receiving the hair quality feature representation vectors, taking each vector as the initial embedding vector of a node, calculating node distances based on feature similarity between nodes, and establishing an edge connection when the similarity meets a preset condition, so as to construct a hair quality graph structure consisting of a node set, an edge set and edge connection weights; S32, inputting the hair quality graph structure into the hair quality type recognition model based on the graph structure attention mechanism and the hierarchical label perception structure, wherein the model comprises a multi-layer graph attention network, each layer calculates attention coefficients based on the current node embedding vector and its adjacent node embedding vectors, performs weighted aggregation of adjacent node features according to the edge connection weights, outputs updated node embedding vectors, and propagates local structure and semantic information through the graph layer by layer; S33, introducing a global attention structure into the graph structure attention mechanism, and performing non-local feature fusion of the embedding vectors of all nodes by constructing a cross-node attention map to generate node embedding vectors containing global context information; S34, constructing a hierarchical label structure based on a predefined hair quality type label system, dividing the labels into several hierarchical categories, including basic type labels, intermediate type labels and refined type labels, generating corresponding label embedding vectors, and forming a label embedding matrix according to the label hierarchy; and S35, performing layer-by-layer matching between the node embedding vectors output by the graph structure attention mechanism and the label embedding matrix, performing basic type label matching to obtain a preliminary classification result, sequentially performing classification judgment at the intermediate and refined layers according to the hierarchical dependency relationship, and finally outputting a complete hair quality type recognition result.
  5. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S4 specifically comprises: S41, collecting continuous image frames of the user's hair region at a set time interval while hair-drying control is executed, constructing a time-ordered image frame sequence, and pairing every two adjacent image frames to generate an image frame pair set; S42, standardizing the size, color and format of the image frame pairs and inputting them into the optical flow estimation model, wherein the optical flow estimation model is a deep optical flow network built on a convolution structure, used to calculate the pixel displacement relation between each pair of image frames and output a corresponding optical flow map in multi-channel form, comprising horizontal displacement components, vertical displacement components and pixel confidence information; S43, normalizing the optical flow map, overlaying the boundary information of the original image to construct a mask, limiting the motion analysis area to the hair image region, and suppressing background and non-target motion interference to generate a motion feature map under structural constraint; S44, forming a motion feature map sequence from the motion feature maps generated by each frame pair in time order, and inputting it into the global motion aggregation optical flow network, wherein the network comprises a trunk convolution extraction layer, a temporal context coding structure and a motion aggregation module, and is used to model the motion evolution features of the hair region; S45, setting a multi-scale motion modeling structure in the global motion aggregation optical flow network, and performing convolution extraction, channel fusion and inter-frame alignment on the motion feature maps under different time windows and spatial scales to generate a multi-dimensional motion representation tensor; S46, performing channel integration and target area positioning on the multi-dimensional motion representation tensor, extracting the dynamic change mode of the corresponding hair region, and outputting the drying dynamic change features.
  6. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S5 specifically comprises: S51, receiving the hair quality type recognition result and the drying dynamic change features, dividing the hair region into several non-overlapping subareas according to image space coordinates, constructing a node set with each subarea as a node, wherein each node comprises local humidity, brightness change, structural direction and motion amplitude features, and constructing an edge set according to the spatial adjacency and motion similarity between nodes to form a hair region graph structure; S52, inputting the hair region graph structure into the improved graph strategy optimization network, which comprises a graph convolution expression layer, a structure driving mechanism unit, a strategy evaluation layer and a control parameter generation unit, wherein the graph convolution expression layer receives the initial node features, propagates state information in combination with the edge connection weights, and extracts structure embedding features; S53, introducing the label embedding vector corresponding to the hair quality type recognition result into the structure driving mechanism unit, projecting it to the same dimension as the node features, broadcasting it to all nodes in the graph through the adjacency matrix, and using it as a condition for updating the node states, so that different hair quality types guide the direction of control parameter generation; S54, setting an action evaluation function in the strategy evaluation layer, scoring the control parameter configuration of the current graph structure according to how well the variation trend of the node structure embeddings over successive propagation rounds accords with the structure label, calculating the action score gradient, propagating it back to the graph convolution expression layer, and executing strategy optimization; S55, generating the control parameters of the current drying stage, including a wind speed adjustment value, a heating power value and an action duration, in the control parameter generation unit according to the updated node states, the structure embedding features and the action scores; S56, repeatedly executing graph propagation, label guidance, strategy evaluation and parameter updating until the control parameter score converges or the maximum iteration number is reached, and finally outputting the optimal control parameters.
  7. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S6 specifically comprises: S61, receiving the optimal control parameters and taking them as the initial control instruction of the control execution stage; S62, setting a control execution structure in the blowing device, consisting of a feedforward control channel and a feedback regulation channel, wherein the feedforward control channel sets the initial output values of wind speed and heating power according to the optimal control parameters; S63, collecting sensing data of the environment and the hair surface at a set time interval during hair-drying control, comprising the current environment temperature, local humidity, wind speed change rate and hair surface thermal response value; S64, comparing the acquired sensing data in real time with the set targets in the optimal control parameters, calculating a control error value, and determining a wind speed correction amount and a heating power correction amount from the error value; S65, inputting the correction amounts into the feedback regulation channel and adjusting the current device output to realize real-time dynamic correction of the hair-drying control state; and S66, repeatedly executing the acquisition, comparison and correction process over the whole control duration, and ending the current hair-drying control operation when the control error falls below a preset threshold or the maximum execution time is reached.
  8. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S7 specifically comprises: S71, collecting image data and environment sensing data of the current user's hair after hair-drying control has been executed, processed in a manner consistent with the data structure adopted when the data input set was constructed in S1; S72, retrieving the original image data and original environment sensing data of S1, and comparing them one-to-one with the newly acquired image data and environment sensing data; S73, extracting three image features (structural texture definition, surface gloss uniformity and color distribution uniformity) from the original and newly acquired image data respectively, and three sensing features (local humidity reduction amplitude, temperature recovery speed and hair surface thermal response change) from the original and newly acquired environment sensing data; and S74, normalizing the six feature differences, assigning fixed weights, and executing a weighted summation to generate the care improvement index.
  9. The intelligent hair quality identification blowing method based on deep learning according to claim 1, wherein the step S8 specifically comprises: S81, comparing the care improvement index with a system-preset drying effect threshold, and judging that the current drying strategy needs adjustment when the index is lower than the set threshold; S82, constructing an individual hair-drying record as a recording unit from the hair quality type recognition result, the optimal control parameters and the care improvement index of the current round of drying, marking it as a non-ideal-effect sample, and writing it into an experience database; S83, in subsequent drying, having the improved graph strategy optimization network retrieve historical samples from the experience database with the same hair quality type and a similar care improvement index, dynamically adjust the weights of the control parameter generation path, and preferentially guide the strategy toward historical samples with excellent effects; S84, during strategy optimization, if several low-improvement-index records occur consecutively under the same hair quality type, having the system automatically adjust the label embedding and node propagation weights to enhance the sensitivity of the structure driving mechanism to personalized differences; S85, if the care improvement index is higher than the preset threshold, judging that the drying strategy is effective, and marking the current control parameter configuration and scoring record as a positive sample; and S86, transmitting the updated experience data structure and the strategy fine-tuning path as input back to the improved graph strategy optimization network described in S5, wherein the positive samples are used to initialize the node label embeddings of similar hair quality types, guide the control parameter generation paths and adjust the scoring logic of the strategy reward function, thereby achieving personalized path updating and improved hair-drying control performance based on excellent historical strategies.
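The top-down fusion path of claim 3 (S22–S23) can be sketched in numpy: a coarse, high-level map is upsampled and added to the same-scale lateral map. The nearest-neighbour upsampling, the 2x pyramid factor and the toy shapes are assumptions; the patent does not fix these details.

```python
import numpy as np

def upsample2x(fmap):
    # Nearest-neighbour 2x upsampling along both spatial axes.
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def top_down_fuse(pyramid):
    # pyramid: feature maps ordered coarse (small) to fine (large),
    # each level twice the resolution of the previous one (assumed).
    fused = [pyramid[0]]
    for lateral in pyramid[1:]:
        # S23: lateral fusion of the upsampled high-level map with the
        # same-scale bottom-level map.
        fused.append(lateral + upsample2x(fused[-1]))
    return fused

coarse = np.ones((2, 2))          # toy high-level semantic map
fine = np.full((4, 4), 2.0)       # toy same-scale lateral map
out = top_down_fuse([coarse, fine])
```

The real network would apply convolutions before and after each fusion; only the scale bookkeeping is shown here.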
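A minimal numpy sketch of S31–S32 in claim 4: nodes are connected when feature similarity exceeds a preset condition, then one attention-weighted aggregation step updates the node embeddings. The cosine similarity measure, the threshold value and the single-layer form are all assumptions.

```python
import numpy as np

def build_graph(vectors, threshold=0.9):
    # S31: connect nodes whose cosine similarity exceeds a preset threshold
    # (both the measure and the value 0.9 are illustrative assumptions).
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sim = v @ v.T
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 1.0)          # keep self-loops
    return adj, sim

def attention_step(vectors, adj, sim):
    # S32: softmax attention over neighbours, then weighted aggregation
    # of adjacent node features.
    scores = np.where(adj > 0, sim, -np.inf)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ vectors

vectors = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]])
adj, sim = build_graph(vectors)
updated = attention_step(vectors, adj, sim)   # node 2 has no neighbour, so it keeps [0, 1]
```

The patent's model stacks several such layers and adds a global (non-local) attention structure on top (S33); one layer suffices to show the mechanics.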
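S43 of claim 5 (normalising the optical flow map and masking it to the hair region) might look like the following numpy sketch; the `(H, W, 2)` flow layout and the binary hair mask are assumptions.

```python
import numpy as np

def masked_motion_feature(flow, hair_mask):
    # flow: (H, W, 2) horizontal/vertical displacement per pixel;
    # hair_mask: (H, W) in {0, 1}, 1 inside the hair region.
    mag = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
    mag = mag / (mag.max() + 1e-8)          # S43: normalise the flow map
    return mag * hair_mask                  # suppress background motion

flow = np.zeros((2, 2, 2))
flow[0, 0] = [3.0, 4.0]                     # magnitude-5 motion at one hair pixel
mask = np.array([[1, 0], [0, 0]], dtype=float)
feat = masked_motion_feature(flow, mask)
```

The confidence channel and the global motion aggregation network of S44–S46 are omitted; only the structural-constraint masking is shown.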
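The propose/score/update loop of S56 in claim 6 can be caricatured without any graph network at all: repeat until the score converges or the iteration budget runs out. `score_fn`, the nudge-style update rule and the convergence test below are purely illustrative stand-ins for the improved graph strategy optimization network.

```python
def optimize_control(score_fn, init_params, lr=0.5, max_iter=50, eps=1e-3):
    # S56: repeatedly propose parameters, score them, and keep improvements
    # until the score converges or the maximum iteration count is reached.
    params, best = dict(init_params), score_fn(init_params)
    for _ in range(max_iter):
        # Hypothetical update rule: nudge each parameter toward 0.8.
        trial = {k: v + lr * (0.8 - v) for k, v in params.items()}
        s = score_fn(trial)
        if abs(s - best) < eps:             # score converged
            break
        if s > best:
            params, best = trial, s
    return params, best

def _quadratic_score(p):
    # Illustrative score: best when the wind-speed value is 0.8.
    return -(p["wind"] - 0.8) ** 2

best_params, best_score = optimize_control(_quadratic_score, {"wind": 0.2})
```

In the patent, scoring comes from the strategy evaluation layer over the hair region graph (S54) and the update propagates gradients back through the graph convolution expression layer; the skeleton of the loop is the same.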
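The feedforward-feedback structure of claim 7 reduces, in its simplest form, to a proportional correction loop: the feedforward term sets the initial heating power from the optimal parameters, and each sensed reading yields an error-driven correction (S62–S66). The gain, tolerance and single-variable scope are assumptions.

```python
def drying_control(target_temp, readings, init_power, gain=0.1, tol=0.5):
    # S62: feedforward channel sets the initial output value.
    power = init_power
    for measured in readings:               # S63: sensed data at set intervals
        error = target_temp - measured      # S64: control error value
        if abs(error) < tol:                # S66: stop below the preset threshold
            break
        power += gain * error               # S65: feedback correction amount
    return power
```

A real controller would correct wind speed and heating power jointly and cap the execution time; the loop above shows only the error-to-correction path.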
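The care improvement index of S74 in claim 8 is a weighted sum of normalized feature differences. A minimal sketch, assuming max-normalization and caller-supplied weights (the patent fixes neither):

```python
def care_improvement_index(before, after, weights):
    # S74: normalise each before/after feature difference, apply fixed
    # weights, and sum. Feature ordering and weights are assumptions.
    diffs = [a - b for b, a in zip(before, after)]
    span = max(abs(d) for d in diffs) or 1.0   # guard against all-zero change
    normed = [d / span for d in diffs]
    return sum(w * d for w, d in zip(weights, normed))

# Two of the six features from S73, equally weighted (illustrative only):
idx = care_improvement_index([0.2, 0.5], [0.6, 0.7], [0.5, 0.5])
```

In the claimed method there are six such features (three image, three sensing); the arithmetic is identical.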

Description

Intelligent hair identification blowing method based on deep learning

Technical Field

The invention relates to the technical field of artificial intelligence and intelligent personal care, in particular to an intelligent hair quality identification blowing method based on deep learning.

Background

With the rapid development of smart-home and personalized care equipment, intelligent blowing products are gradually evolving from the traditional constant-temperature, constant-wind-speed mode toward automatic identification and self-adaptive adjustment. Existing intelligent blowing devices mainly rely on simple physical sensing means, such as infrared temperature sensors and humidity sensors, and make rough adjustments of wind speed or heating power by detecting changes in the surface temperature or humidity of the hair. In actual use, however, the following problems generally exist:

  • Current systems lack deep modeling capability for users' hair types, textures and the dynamic drying process, and cannot accurately judge how different hair responds during drying, so drying strategies are difficult to customize individually.
  • Feature extraction is mainly based on local image edge detection or single-scale texture analysis, which cannot effectively capture multi-scale structural features and luster change features, reducing the accuracy of hair quality identification.
  • The drying control process relies on simple closed-loop temperature control logic, lacks modeling support for dynamic changes of the hair region, and cannot adjust wind power and heat distribution in real time.
  • Conventional control strategies generally lack a data-driven strategy optimization mechanism, so iterative improvement and personalized adaptation of the drying process cannot be realized from historical feedback.
Therefore, how to provide an intelligent hair quality identification blowing method based on deep learning is a problem to be solved by those skilled in the art.

Disclosure of Invention

The invention aims to provide an intelligent hair quality identification blowing method based on deep learning, which fully integrates artificial intelligence technologies such as a feature pyramid network, a multi-scale feature bidirectional interaction mechanism, a graph structure attention mechanism, optical flow estimation, graph strategy optimization and feedforward-feedback control. The system realizes hair quality feature extraction, type identification, dynamic analysis of the drying process, optimal control parameter generation and care effect feedback update, and has the advantages of high identification precision, intelligent control response, quantifiable care effect and self-adaptive strategy optimization. According to an embodiment of the invention, the intelligent hair quality identification blowing method based on deep learning comprises the following steps: S1, acquiring user hair images and environment sensing data, denoising, enhancing and standardizing the image data, and constructing a data input set; S2, inputting the data input set into a feature pyramid network, introducing a multi-scale feature bidirectional interaction mechanism, extracting texture, luster and structural features at different scales, and generating a uniform-dimension hair quality feature representation vector; S3, inputting the hair quality feature representation vector into a hair quality type recognition model based on a graph structure attention mechanism and a hierarchical label perception structure, constructing a graph structure with hair samples as nodes and feature similarity as edges, and outputting a hair quality type recognition result through the label perception mechanism; S4, collecting continuous image frames during blowing, constructing an optical flow estimation model between image frame pairs, and extracting drying dynamic change features of the hair region through a global motion aggregation optical flow network; S5, combining the hair quality type recognition result and the drying dynamic change features, constructing a hair region graph structure, inputting it into an improved graph strategy optimization network, executing multiple rounds of control parameter generation and strategy updating, and finally outputting optimal control parameters; S6, executing hair-drying control according to the optimal control parameters, adjusting wind speed and heating power in real time with a feedforward-feedback control structure, and performing dynamic correction with sensor data; S7, acquiring images and sensing data after hair drying is completed, and calculating a care improvement index based on the change amplitude of key features before and after drying; and S8, generating feedback information according to the care improvement index, recording the current hair quality state, the control pa