
CN-116434307-B - Micro-expression feature extraction method based on motion unit prototype template

CN116434307B

Abstract

The invention discloses a micro-expression feature extraction method based on motion-unit prototype templates. To address facial differences between subjects and head translation of the same subject within a video, preprocessing uses an organ-based facial position correction and cropping method to ensure accurate optical-flow feature extraction, and motion-unit-oriented prototype templates are proposed for accurately analyzing the motion units in a micro-expression. To capture micro-expression actions precisely, the optical-flow map sequence of a micro-expression video is matched against the AU motion templates, yielding a deep analysis of the complex micro-expression actions in the video and improving the accuracy and interpretability of micro-expression features. The method can effectively capture the fine facial motion occurring when a micro-expression takes place, and can be used for micro-expression detection, recognition and generation.

Inventors

  • Li Haifeng
  • He Yuhong
  • Xu Zhongliang
  • Ma Lin

Assignees

  • Harbin Institute of Technology (哈尔滨工业大学)

Dates

Publication Date
2026-05-12
Application Date
2023-04-18

Claims (2)

  1. The micro-expression feature extraction method based on motion-unit prototype templates is characterized by comprising the following steps:
     Step 1: for the training set, select an initial frame and a peak frame in each micro-expression video, where the initial frame is the first frame at which the micro-expression begins and the peak frame is the frame with the largest micro-expression motion amplitude, representing the maximum motion information.
     Step 2: locate the feature-point coordinates of the initial frame and select from them the nose-tip point P_n = (x_n, y_n), the left outer eye corner P_le = (x_le, y_le), the right outer eye corner P_re = (x_re, y_re), the left mouth corner P_lm = (x_lm, y_lm) and the right mouth corner P_rm = (x_rm, y_rm).
     Step 3: correct the feature-point positions of the peak frame according to the nose-tip-region optical flow in the initial and peak frames. The specific sub-steps of step 3 are as follows:
     Step 3-1: according to the nose-tip point P_n = (x_n, y_n) of the initial frame, determine the nose-tip region boxes N_init and N_peak of the initial and peak frames.
     Step 3-2: compute the dense optical flow of the nose-tip regions of the initial and peak frames with the Farneback optical-flow method, F = Farneback(N_init, N_peak), obtaining an optical-flow matrix F of size w_n × h_n × 2, where w_n and h_n are the width and length of the nose-tip region and the two channels hold the horizontal component u and the vertical component v.
     Step 3-3: compute the mean (Δx, Δy) of the nose-tip optical-flow matrix F to measure the facial position change between the two frames, where Δx = mean(u) represents movement in the horizontal direction and Δy = mean(v) represents movement in the vertical direction.
     Step 3-4: according to the offset (Δx, Δy) of the initial and peak frames obtained in step 3-3, compute the feature-point coordinates of the peak frame from those of the initial frame, e.g. x'_le = x_le + Δx and y'_le = y_le + Δy, and likewise for the other feature points.
     Step 4: crop the eye regions according to the eye-corner feature points.
     Step 5: compute, with a dense optical-flow method, the dense optical-flow maps of the eye and mouth areas between the initial frame and the peak frame of micro-expression video i, denoted O_i^eye and O_i^mouth.
     Step 6: superpose and average the optical-flow maps of the micro-expression videos sharing the same motion unit to obtain the prototype template of that motion unit, T_j = (1/|S_j|) Σ_{i ∈ S_j} O_i, which represents the basic motion pattern of motion unit j when a micro-expression occurs, where j is the index of the motion unit, drawn from A_eye, the set of motion units occurring in the eye region, or A_mouth, the set of motion units occurring in the mouth region, and S_j is the set of micro-expression video indices containing motion unit j. After all motion-unit prototype templates are computed, a motion-unit template set {T_1, …, T_V} is obtained, where V is the number of motion units.
     Step 7: for the test set, after the preprocessing of steps 2, 3 and 4, compute the dense optical-flow maps of the eye and mouth areas between the two micro-expression frames with the dense optical-flow method, denoted O^eye and O^mouth.
     Step 8: compute the matching degree m_j between an optical-flow map O and a motion-unit prototype template T_j, evaluated over the w × h positions of the two matrices, where w and h are the width and length of the matrices.
     Step 9: after matching against all motion-unit prototype templates, obtain the micro-expression feature based on the motion-unit templates, M = (m_1, …, m_V).
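The offset-correction, template-averaging and matching steps of claim 1 can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the patent's exact formulas: the function names are invented here, the dense Farneback flow would in practice come from e.g. OpenCV's `cv2.calcOpticalFlowFarneback`, and cosine similarity is only an assumed form for the step-8 matching degree, which the source does not fully specify.

```python
import numpy as np

def estimate_offset(flow_nose):
    """Step 3-3: mean of the nose-tip flow field as the global head translation.
    flow_nose: (h, w, 2) array; [..., 0] = horizontal u, [..., 1] = vertical v."""
    dx = flow_nose[..., 0].mean()
    dy = flow_nose[..., 1].mean()
    return dx, dy

def correct_points(points, dx, dy):
    """Step 3-4: shift initial-frame landmarks by the offset to get peak-frame landmarks."""
    return {name: (float(x + dx), float(y + dy)) for name, (x, y) in points.items()}

def build_prototypes(flows_by_au):
    """Step 6: prototype template T_j = element-wise mean of flow maps sharing AU j."""
    return {j: np.mean(np.stack(flows), axis=0) for j, flows in flows_by_au.items()}

def matching_degree(flow, template):
    """Step 8 (assumed form): cosine similarity between the flattened flow map
    and template over their w*h positions."""
    a, b = flow.ravel(), template.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Demo with synthetic flow fields standing in for Farneback output.
nose_flow = np.full((8, 8, 2), [2.0, -1.0])        # uniform shift: dx=2, dy=-1
dx, dy = estimate_offset(nose_flow)
pts = correct_points({"left_eye": (30.0, 40.0)}, dx, dy)
flows_by_au = {1: [np.ones((4, 4, 2)), 3 * np.ones((4, 4, 2))]}
protos = build_prototypes(flows_by_au)
m = matching_degree(np.ones((4, 4, 2)), protos[1])
print(dx, dy, pts["left_eye"], round(m, 3))
```

In a full pipeline the (h, w, 2) flow arrays would be produced by `cv2.calcOpticalFlowFarneback(init_gray, peak_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)` on the cropped grayscale regions.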
  2. The micro-expression feature extraction method based on motion-unit prototype templates according to claim 1, wherein the specific sub-steps of step 4 are as follows:
     Step 4-1: based on the left outer eye corner P_le = (x_le, y_le) and the right outer eye corner P_re = (x_re, y_re), compute the cropping box of the eye region, with top-left corner (x1_e, y1_e) and bottom-right corner (x2_e, y2_e), obtaining the initial-frame eye region R_eye.
     Step 4-2: based on the left mouth corner P_lm and the right mouth corner P_rm, compute the cropping box of the mouth region, with top-left corner (x1_m, y1_m) and bottom-right corner (x2_m, y2_m), obtaining the initial-frame mouth region R_mouth.
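The claim-2 cropping idea can be sketched as follows: derive a crop box from the two corner landmarks, padded by a margin proportional to the inter-corner distance. The margin factor 0.4 and the function names are illustrative assumptions, not the patent's exact box formulas.

```python
import numpy as np

def corner_crop_box(p_left, p_right, margin=0.4):
    """Compute a crop box spanning two corner landmarks (eye or mouth corners),
    padded by margin * the horizontal inter-corner distance on every side.
    The margin value is an assumed stand-in for the patent's box formulas."""
    (xl, yl), (xr, yr) = p_left, p_right
    d = xr - xl                                           # inter-corner distance
    x1, y1 = xl - margin * d, min(yl, yr) - margin * d    # top-left corner
    x2, y2 = xr + margin * d, max(yl, yr) + margin * d    # bottom-right corner
    return (int(x1), int(y1)), (int(x2), int(y2))

def crop(img, box):
    """Slice the region delimited by the box out of the frame."""
    (x1, y1), (x2, y2) = box
    return img[y1:y2, x1:x2]

frame = np.zeros((200, 200), dtype=np.uint8)              # dummy initial frame
box = corner_crop_box((60.0, 80.0), (140.0, 80.0))        # outer eye corners
eye_region = crop(frame, box)
print(box, eye_region.shape)
```

The same helper applies unchanged to the mouth corners in step 4-2; only the landmark pair differs.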

Description

Micro-expression feature extraction method based on motion unit prototype template

Technical Field

The invention relates to the technical field of micro-expression feature extraction, in particular to a micro-expression feature extraction method based on motion-unit prototype templates.

Background

Expression is an important channel of human emotional interaction. Expressions can be classified into macro-expressions and micro-expressions according to the duration and intensity of the facial emotional change. A macro-expression lasts 0.5 s to 4 s and is accompanied by pronounced facial-muscle movement. A micro-expression is a rapid, involuntary expression, typically lasting less than 0.5 s. Neurophysiological research shows that micro-expressions are hardly controlled by subjective consciousness; they are genuine exposures of human emotion and a window into human cognitive psychology and neural reflexes. Because micro-expression actions are very small and brief, and are affected by factors such as head shaking, blinking and facial differences between subjects, micro-expression detection and recognition accuracy has long been low. Eliminating irrelevant disturbances and accurately capturing the occurrence of micro-expressions is a very challenging task. Existing micro-expression feature extraction techniques fall into two categories: traditional hand-crafted features and deep features. Traditional hand-crafted features include texture features, optical-flow features and the like. Such features are widely applicable in image processing, but are poor at detecting subtle facial changes and can hardly reflect the dynamic evolution of micro-expressions.
Optical flow describes object motion, is robust to facial differences, and is well suited to detecting the fine actions of micro-expressions, so researchers regard the optical-flow method as an important tool in micro-expression research. However, extracting optical-flow features requires empirically choosing the facial regions related to micro-expression motion, which is often not optimal. In recent years deep learning has succeeded on macro-expressions, but because existing micro-expression databases are small, they can hardly meet the data requirements of deep models, and problems such as overfitting often occur. In addition, deep-learning-based micro-expression feature extraction often lacks an explicit basis and interpretable results. In application scenarios such as abnormal-behavior detection and abnormal-emotion monitoring, micro-expression analysis results without a clear basis and a transparent process are hard to trust. Ekman first proposed the concept of the facial motion unit (AU) in the Facial Action Coding System. Facial motion units are basic facial motions that can be combined into various facial expressions; when a micro-expression occurs, one or more motion units occur. Facial motion units are a powerful tool for analyzing micro-expressions and have therefore become a hotspot in micro-expression research. Each AU is produced by the traction of fixed facial muscles, so when the same motion unit occurs, a similar facial motion pattern appears. However, owing to natural facial differences, different subjects may show somewhat different facial motions when producing the same AU; even the same subject shows markedly different motion amplitudes for the same AU under different emotional intensities. This makes micro-expression AU recognition difficult.
Therefore, designing a method that can accurately acquire AU motion patterns and that is robust to the facial-motion variation within the same AU is very important: it not only helps improve micro-expression detection, but is also of great significance for subsequent, more accurate and interpretable micro-expression modeling.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a micro-expression feature extraction method based on motion-unit prototype templates. To achieve the above object, the invention adopts the following technical scheme. A micro-expression feature extraction method based on motion-unit prototype templates comprises the following steps: Step 1, for the training set, select an initial frame and a peak frame in the micro-expression video. The initial frame is the first frame at which the micro-expression begins; the peak frame is the frame with the largest micro-expression motion amplitude, representing the maximum motion information. Step 2