CN-121982765-A - Emotion recognition method, electronic device and storage medium
Abstract
Embodiments of the present application provide an emotion recognition method, an electronic device, and a storage medium, belonging to the technical field of artificial intelligence. The method includes: evaluating the computing power of a target electronic device according to its processor frequency to obtain a computing power level; segmenting an original emotion recognition model into intermediate model segments; calculating a target sparsity for each intermediate model segment according to the computing power level, and compressing each intermediate model segment according to its target sparsity to obtain compressed model segments; grouping the compressed network weights according to the processor processing bit width to obtain weight groups, and packing the weight groups into target model segments; and determining a current model segment from the target model segments, loading it into a shared memory of the electronic device, performing emotion recognition on a preset face image through the loaded segment, and, during recognition, evicting the previously loaded model segment from the shared memory. The method is compatible with electronic devices of different hardware configurations, enabling efficient local execution of emotion recognition.
Inventors
- SU YIDAN
- XU HUALIN
- ZHAO KAI
- LIANG JING
- SUN JIE
Assignees
- 深圳市深圳通有限公司
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-04-07
Claims (10)
- 1. An emotion recognition method, comprising: acquiring an original emotion recognition model, and acquiring a processor frequency and a processor processing bit width of a target electronic device; performing a computing power evaluation on the electronic device according to the processor frequency to obtain a computing power level of the electronic device; segmenting the original emotion recognition model into at least two intermediate model segments; calculating a target sparsity for each intermediate model segment according to the computing power level; for each intermediate model segment, compressing the segment according to its target sparsity to obtain a compressed model segment having compressed network weights; for each compressed model segment, grouping the compressed network weights according to the processor processing bit width to obtain weight groups, and packing the weight groups into a target model segment; and determining a current model segment from the target model segments, loading the current model segment into a shared memory of the electronic device, performing emotion recognition on a preset face image through the current model segment in the shared memory, and, during the recognition process, evicting the previously loaded model segment from the shared memory.
- 2. The method of claim 1, wherein grouping the compressed network weights according to the processor processing bit width to obtain the weight groups comprises: determining a reference weight count according to the processor processing bit width, and grouping the compressed network weights based on the reference weight count to obtain candidate weight groups; calculating a quantization threshold for each candidate weight group according to the maximum value, the minimum value, and the reference weight count of the group; and linearly quantizing each candidate weight group according to its quantization threshold to obtain the weight groups.
- 3. The method of claim 2, wherein determining the reference weight count according to the processor processing bit width comprises: acquiring the remaining battery capacity and the processor operating temperature of the electronic device; determining a data bit width according to the remaining battery capacity and the processor operating temperature; and calculating the reference weight count from the processor processing bit width and the data bit width.
- 4. The method of claim 1, wherein the intermediate model segment comprises a plurality of output channels, each output channel having channel network weights, and wherein compressing the intermediate model segment according to the target sparsity to obtain the compressed model segment comprises: for each output channel, calculating an importance score of the output channel according to its channel network weights; and performing channel pruning on the intermediate model segment according to the target sparsity and the importance scores to obtain the compressed model segment.
- 5. The method of claim 4, wherein performing channel pruning on the intermediate model segment according to the target sparsity and the importance scores to obtain the compressed model segment comprises: calculating a number of channels to retain according to the target sparsity and the number of output channels; screening the output channels according to the number of channels to retain and the importance scores to obtain target channels; and sequentially identifying the target channels, and determining the compressed model segment based on the sequentially identified target channels.
- 6. The method of any one of claims 1 to 5, wherein calculating the target sparsity for each intermediate model segment according to the computing power level comprises: obtaining a reference sparsity for each intermediate model segment; obtaining a computing power adjustment factor according to the computing power level; calculating an attenuation factor for the reference sparsity according to the computing power adjustment factor and the computing power level; and, for each intermediate model segment, multiplying the attenuation factor by the reference sparsity of the segment to obtain the target sparsity of the segment.
- 7. The method of any one of claims 1 to 5, wherein performing the computing power evaluation on the electronic device according to the processor frequency to obtain the computing power level comprises: benchmarking the processor of the electronic device with a preset convolution operator to obtain the processor throughput; performing the computing power evaluation according to the processor frequency and the throughput to obtain a computing power evaluation score; and querying a preset score-to-level mapping according to the computing power evaluation score to obtain the computing power level.
- 8. The method of claim 7, wherein after performing the computing power evaluation according to the processor frequency and the throughput, the method further comprises: obtaining the processor model of the electronic device; querying a preset model-to-level mapping according to the processor model to obtain a reference level; and updating the computing power evaluation score according to the reference level.
- 9. An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the method of any one of claims 1 to 8.
- 10. A computer-readable storage medium storing a computer program that, when executed by a processor, implements the method of any one of claims 1 to 8.
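The channel-pruning procedure of claims 4 and 5 can be sketched as follows. The claims do not fix a particular importance formula, so the L1 norm of each output channel's weights is an illustrative assumption, as are the function names; the retained channels keep their original order, matching the "sequential identification" step of claim 5.

```python
def channel_importance(channel_weights):
    """Illustrative importance score: L1 norm of one output channel's weights."""
    return sum(abs(w) for w in channel_weights)

def prune_channels(segment_weights, target_sparsity):
    """Keep the most important (1 - sparsity) fraction of output channels,
    preserving their original channel order."""
    n = len(segment_weights)
    keep = max(1, round(n * (1.0 - target_sparsity)))          # channels to retain
    ranked = sorted(range(n),
                    key=lambda i: channel_importance(segment_weights[i]),
                    reverse=True)
    kept = sorted(ranked[:keep])                               # restore channel order
    return [segment_weights[i] for i in kept]
```

For example, pruning four channels at a target sparsity of 0.5 retains the two channels with the largest L1 norms.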
Description
Emotion recognition method, electronic device and storage medium

Technical Field

The present application relates to the field of artificial intelligence technologies, and in particular to an emotion recognition method, an electronic device, and a storage medium.

Background

To meet users' demands for intelligent features, many mobile applications load artificial intelligence models to perform facial-image-based emotion recognition and then provide different intelligent functions based on the user's emotion (e.g., generating emotion-related audio, warning of abnormal emotions, and emotion-related image processing). However, running the same emotion recognition model on electronic devices with different hardware configurations places different demands on computing and storage resources. On devices with low-end hardware, the installation package of some models may be too large for installation to complete, or the model may be difficult to run properly even after installation. In the related art, emotion recognition models are typically deployed in the cloud, but this approach may compromise user privacy. How to remain compatible with electronic devices of different hardware configurations while running emotion recognition efficiently on-device is therefore a technical problem to be solved.

Disclosure of Invention

Embodiments of the present application mainly aim to provide an emotion recognition method, an electronic device, and a storage medium that are compatible with electronic devices of different hardware configurations, so as to run emotion recognition efficiently on-device.
To achieve the above object, a first aspect of embodiments of the present application provides an emotion recognition method, including: acquiring an original emotion recognition model, and acquiring a processor frequency and a processor processing bit width of a target electronic device; performing a computing power evaluation on the electronic device according to the processor frequency to obtain a computing power level of the electronic device; segmenting the original emotion recognition model into at least two intermediate model segments; calculating a target sparsity for each intermediate model segment according to the computing power level; for each intermediate model segment, compressing the segment according to its target sparsity to obtain a compressed model segment having compressed network weights; for each compressed model segment, grouping the compressed network weights according to the processor processing bit width to obtain weight groups, and packing the weight groups into a target model segment; and determining a current model segment from the target model segments, loading the current model segment into a shared memory of the electronic device, performing emotion recognition on a preset face image through the current model segment in the shared memory, and, during the recognition process, evicting the previously loaded model segment from the shared memory.
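The segment-streaming loop of the first aspect can be sketched as follows. Each target model segment is modeled as a callable and the shared memory as a bounded queue; the function names and the two-slot capacity are illustrative assumptions, since the method only requires that the previously loaded segment be evicted from shared memory while recognition proceeds.

```python
from collections import deque

def stream_inference(target_segments, face_input, capacity=2):
    """Run model segments one by one, loading each into a bounded 'shared
    memory' and evicting the oldest loaded segment when capacity is exceeded."""
    shared = deque()
    activation = face_input               # input to the first segment
    for segment in target_segments:
        shared.append(segment)            # load the current model segment
        if len(shared) > capacity:
            shared.popleft()              # evict the previously loaded segment
        activation = segment(activation)  # recognition through this segment
    return activation                     # final emotion prediction
```

Because at most `capacity` segments reside in shared memory at any moment, peak memory is bounded regardless of how many segments the original model was split into.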
In some embodiments, grouping the compressed network weights according to the processor processing bit width to obtain the weight groups includes: determining a reference weight count according to the processor processing bit width, and grouping the compressed network weights based on the reference weight count to obtain candidate weight groups; calculating a quantization threshold for each candidate weight group according to the maximum value, the minimum value, and the reference weight count of the group; and linearly quantizing each candidate weight group according to its quantization threshold to obtain the weight groups. In some embodiments, determining the reference weight count according to the processor processing bit width includes: acquiring the remaining battery capacity and the processor operating temperature of the electronic device; determining a data bit width according to the remaining battery capacity and the processor operating temperature; and calculating the reference weight count from the processor processing bit width and the data bit width. In some embodiments, the intermediate model segment includes a plurality of output channels, each output channel having channel network weights, and compressing the intermediate model segment according to the target sparsity to obtain a compressed model segment includes: for each output channel, calculating an importance score of the output channel according to its channel network weights; and performing channel pruning on the intermediate model segment according to the target sparsity and the importance scores to obtain the compressed model segment.
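The grouping and linear quantization of the weight-packing embodiments can be sketched as follows. The group size `processor_bits // data_bits` (the reference weight count) and the min/max linear-quantization formula are illustrative assumptions; the document does not specify the exact quantization scheme, only that a threshold is derived from each group's maximum, minimum, and the reference weight count.

```python
def group_and_quantize(weights, processor_bits=32, data_bits=8):
    """Split compressed weights into groups of processor_bits // data_bits
    values and linearly quantize each group to data_bits-bit codes."""
    group_size = processor_bits // data_bits       # reference weight count
    levels = (1 << data_bits) - 1                  # e.g. 255 for 8-bit codes
    packed = []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        w_min, w_max = min(group), max(group)
        span = (w_max - w_min) or 1.0              # avoid division by zero
        codes = [round((w - w_min) * levels / span) for w in group]
        packed.append({"min": w_min, "scale": span / levels, "codes": codes})
    return packed
```

Each packed group records its minimum and scale so the original weights can be approximately reconstructed as `min + code * scale` at inference time.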