
CN-121979397-A - Smart watch interaction method and system based on gesture actions

CN121979397A

Abstract

The invention relates to the field of electronic smart watches, and in particular discloses a smart watch interaction method and system based on gesture actions. An eye image is cut out of a facial image, and the pupil area is separated according to the gray-level distribution of the eye image. The method comprises: calculating the pupil center coordinates; constructing an eye coordinate system with the midpoint of the line connecting the two inner canthi as the origin, the direction from the inner canthus to the outer canthus as the X axis, and the direction from the lower-eyelid midpoint to the upper-eyelid midpoint as the Y axis; mapping the pupil center coordinates into the eye coordinate system and calculating the offset of the pupil relative to the origin; judging the gaze direction from the offset and an offset threshold; judging the operation direction of the operation gesture from the gaze direction; associating and structurally integrating the operation direction with the operation gesture according to a differentiated mapping rule to form an operation gesture comprising an operation type and an operation direction; and acquiring the operation instruction corresponding to the operation gesture based on a mapping-relation database of operation gestures and interaction instructions. In this way, the user interaction experience can be improved.
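The eye-coordinate-system mapping described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function names, the sign convention (+x rightward, +y upward), and the simple threshold classifier are assumptions.

```python
# Sketch of the abstract's gaze-offset computation (assumed names/conventions).

def eye_origin(inner_left, inner_right):
    """Origin of the eye coordinate system: midpoint of the line
    joining the two inner canthi (inner eye corners)."""
    return ((inner_left[0] + inner_right[0]) / 2.0,
            (inner_left[1] + inner_right[1]) / 2.0)

def pupil_offset(pupil_center, origin):
    """Offset (dx, dy) of the pupil center relative to the origin."""
    return (pupil_center[0] - origin[0], pupil_center[1] - origin[1])

def gaze_direction(dx, dy, tx, ty):
    """Classify the gaze from the offset and per-axis thresholds.
    Assumed convention: +x is rightward, +y is upward; 'center' means
    the gaze faces the watch plane directly."""
    if abs(dx) <= tx and abs(dy) <= ty:
        return "center"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"
```

For example, with inner canthi at (100, 50) and (140, 50), a pupil center at (128, 50) yields an offset of (8.0, 0.0), which the classifier above labels "right" for thresholds (5, 5).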

Inventors

  • HU CHUANLIN
  • ZHANG WEN

Assignees

  • 重庆舟海智能科技股份有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-04-02

Claims (10)

  1. A smart watch interaction method based on gesture actions, characterized by comprising the following steps: Step 1: the smart watch keeps a screen-off standby state and enters a preset wake-up mode after receiving a preset wake-up gesture input by the user, or after a preset condition is met. Step 2: after the state-mode information is determined, collecting an operation gesture input by the user and judging, according to a preset judging rule, whether the operation gesture matches the wake-up gesture: if so, performing step 4; if not, acquiring a facial image of the user, extracting eye features and expression features from the facial image, and judging whether the eye features satisfy a preset gazing condition and whether the expression features satisfy a preset non-misoperation condition; if both are satisfied, performing step 3, otherwise returning to step 1. Step 3: determining the user's gaze direction from the facial image, namely: cutting an eye image out of the facial image, separating the pupil area according to the gray-level distribution of the eye image, and calculating the pupil center coordinates (x_p, y_p); identifying the inner canthi, outer canthi, upper-eyelid midpoint and lower-eyelid midpoint, and constructing an eye coordinate system with the midpoint O of the line connecting the two inner canthi as the origin, the direction from the inner canthus to the outer canthus as the X axis, and the direction from the lower-eyelid midpoint to the upper-eyelid midpoint as the Y axis; mapping the pupil center coordinates into the eye coordinate system and calculating the offset (Δx, Δy) of the pupil relative to the origin O; judging the gaze direction according to the offset (Δx, Δy) and the offset thresholds (T_x, T_y); judging the operation direction of the operation gesture according to the gaze direction, and associating and structurally integrating the operation direction with the operation gesture according to a differentiated mapping rule to form an operation gesture comprising an operation type and an operation direction. Step 4: acquiring the operation instruction corresponding to the operation gesture based on a mapping-relation database of operation gestures and interaction instructions. Step 5: responding to the operation instruction and executing the corresponding interaction action.
  2. The smart watch interaction method based on gesture actions according to claim 1, characterized in that, in step 3, cutting an eye image out of the facial image and separating the pupil area according to the gray-level distribution of the eye image comprises: performing graying on the collected facial image to obtain a gray image, and performing noise reduction on the gray image with a Gaussian filter algorithm; detecting eye key points in the gray image based on an MTCNN neural network, the eye key points comprising the inner canthi, outer canthi, upper-eyelid edge points and lower-eyelid edge points; determining the boundary coordinates of the eye area from the eye key points and cropping the eye image according to the boundary coordinates; computing the gray histogram of the eye image and determining its bimodal threshold, the bimodal threshold being the gray value at the valley between the two peaks of the histogram; adjusting the segmentation threshold with an adaptive threshold segmentation algorithm using the mean gray value of the eye image as reference, and marking the area of the eye image whose gray values fall below the adjusted segmentation threshold as the pupil candidate area; performing a morphological closing operation on the pupil candidate area to fill the candidate area; and performing contour detection on the candidate area with a preset contour detection algorithm, selecting the candidate area with a circular contour as the pupil area.
  3. The smart watch interaction method based on gesture actions according to claim 2, characterized in that, in step 3, calculating the pupil center coordinates (x_p, y_p) and identifying the inner canthus, outer canthus, upper-eyelid midpoint and lower-eyelid midpoint comprises: calculating the center coordinates of the pupil area by the gravity-center (gray-weighted centroid) method, with the formula x_p = (Σ_{i=1}^{m} Σ_{j=1}^{n} x_ij · g_ij) / (Σ_{i=1}^{m} Σ_{j=1}^{n} g_ij), y_p = (Σ_{i=1}^{m} Σ_{j=1}^{n} y_ij · g_ij) / (Σ_{i=1}^{m} Σ_{j=1}^{n} g_ij), where (x_ij, y_ij) are the coordinates of a pixel in the pupil area, g_ij is that pixel's gray value, and m and n are respectively the numbers of rows and columns of the pupil area; performing key-point detection on the cropped eye image and outputting the key-point coordinates, the key points comprising inner-canthus feature points, outer-canthus feature points, upper-eyelid edge points and lower-eyelid edge points; averaging the coordinates of the detected inner-canthus feature points to obtain the inner-canthus coordinates of both eyes (x_L, y_L) and (x_R, y_R), and likewise obtaining the outer-canthus coordinates of both eyes; fitting the upper-eyelid edge points to obtain an upper-eyelid contour curve and taking its midpoint coordinates as the upper-eyelid midpoint (x_u, y_u); and fitting the lower-eyelid edge points to obtain a lower-eyelid contour curve and taking its midpoint coordinates as the lower-eyelid midpoint (x_d, y_d).
  4. The smart watch interaction method based on gesture actions according to claim 3, characterized in that, in step 3, mapping the pupil center coordinates (x_p, y_p) into the eye coordinate system and calculating the offset (Δx, Δy) of the pupil relative to the origin O comprises: calculating the origin O = (x_O, y_O) of the eye coordinate system from the inner-canthus coordinates of both eyes (x_L, y_L) and (x_R, y_R), with the formula x_O = (x_L + x_R)/2, y_O = (y_L + y_R)/2; calculating from the pupil center coordinates (x_p, y_p) the horizontal offset Δx and vertical offset Δy of the pupil relative to the origin, with the formula Δx = x_p − x_O, Δy = y_p − y_O; and smoothing the calculated Δx and Δy with a sliding-window filtering algorithm, taking the mean of the offsets over a first consecutive preset number of frames as the final offset result.
  5. The smart watch interaction method based on gesture actions according to claim 4, characterized in that, in step 3, judging the gaze direction according to the offset (Δx, Δy) and the offset thresholds (T_x, T_y) comprises: calculating the horizontal distance D between the inner and outer canthi and the vertical distance H between the upper-eyelid midpoint and the lower-eyelid midpoint; adaptively setting the offset thresholds T_x = k_1 · D and T_y = k_2 · H with the horizontal distance D and the vertical distance H as reference, where k_1 and k_2 are preset proportionality coefficients; setting gaze-direction judgment rules based on the offset (Δx, Δy): if Δx ≤ −T_x holds for a second consecutive preset number of frames, judging the gaze as leftward; if Δx ≥ T_x holds for the second consecutive preset number of frames, judging the gaze as rightward; if Δy ≤ −T_y holds for the second consecutive preset number of frames, judging the gaze as downward; if Δy ≥ T_y holds for the second consecutive preset number of frames, judging the gaze as upward; if |Δx| < T_x and |Δy| < T_y hold for the second consecutive preset number of frames, judging that the gaze directly faces the plane of the smart watch; and if a gaze-direction switch is detected, re-counting the second consecutive preset number of frames and updating the gaze-direction judgment result once that count is reached.
  6. The smart watch interaction method based on gesture actions according to claim 5, characterized in that, in step 3, judging the operation direction of the operation gesture according to the gaze direction and associating and structurally integrating the operation direction with the operation gesture according to the differentiated mapping rule to form an operation gesture comprising an operation type and an operation direction comprises: presetting a differentiated mapping rule base in which rules are classified by operation type, each operation type corresponding to its own exclusive mapping logic; extracting the type identifier and the action parameters of the operation gesture, the action parameters comprising duration and action amplitude; matching the corresponding differentiated mapping rule from the rule base based on the type identifier, and determining the operation direction in combination with the gaze direction; constructing a structured data model of the operation gesture, the model comprising three core fields: operation type, operation direction and action parameters; and filling the type identifier, operation direction and action parameters of the operation gesture into the corresponding fields of the structured data model to generate the operation gesture.
  7. The smart watch interaction method based on gesture actions according to claim 6, characterized in that, in step 4, acquiring the operation instruction corresponding to the operation gesture based on the mapping-relation database of operation gestures and interaction instructions comprises: constructing an interaction-instruction mapping-relation database comprising a basic gesture subset and a precise gesture subset, wherein the basic gesture subset stores the correspondence between wake-up gestures and basic operation instructions, with the wake-up mode as a classification index; if triggered by a wake-up gesture, extracting the state mode of the smart watch and the operation type of the operation gesture, querying the basic gesture subset with the state mode and operation type as search keys, and acquiring the corresponding basic operation instruction; if triggered by an operation gesture, extracting the operation type, operation direction and action parameters from the structured data model of the operation gesture, querying the precise gesture subset with the three as a combined search key, and acquiring the corresponding precise operation instruction.
  8. The smart watch interaction method based on gesture actions according to claim 7, characterized in that, in step 2, judging whether the operation gesture matches the wake-up gesture according to the preset judging rule comprises: constructing a mapping-relation table of operation gestures and wake-up gestures, determining the wake-up gestures and gesture feature parameters corresponding to the different state modes; querying the mapping-relation table for the wake-up gesture and feature-parameter thresholds corresponding to the current state mode; collecting the real-time feature parameters of the operation gesture and comparing them with the queried preset feature-parameter thresholds; judging that the operation gesture matches the wake-up gesture if all real-time feature parameters are within the preset threshold ranges, and judging that it does not match if any real-time feature parameter exceeds its preset threshold range.
  9. The smart watch interaction method based on gesture actions according to claim 8, characterized in that, in step 2, extracting the eye features and expression features from the facial image and judging whether the eye features satisfy the preset gazing condition and whether the expression features satisfy the preset non-misoperation condition comprises: extracting the eye features by calculating the pupil gazing duration, eyelid opening degree and eyeball rotation frequency from the eye key points; extracting the expression features by detecting facial key points with the Dlib facial-landmark detection algorithm and calculating the eyebrow offset, mouth-corner lifting angle and lip closure degree from the facial key points; the preset gazing condition being that the pupil gazing duration is greater than or equal to a preset gazing duration, the eyelid opening degree is greater than or equal to a preset opening degree, and the eyeball rotation frequency is less than or equal to a preset rotation frequency; the preset non-misoperation condition being that the eyebrow offset is less than or equal to a preset offset, the mouth-corner lifting angle is greater than or equal to a preset lifting angle, and the lip closure degree is greater than or equal to a preset closure degree; if the eye features satisfy the preset gazing condition and the expression features satisfy the preset non-misoperation condition, judging that the conditions are satisfied; if any eye-feature parameter fails the preset gazing condition or any expression-feature parameter fails the preset non-misoperation condition, judging that the conditions are not satisfied.
  10. A smart watch interaction system based on gesture actions, characterized by being configured to perform the method of any one of claims 1 to 9.
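The structured gesture model and mapping-database lookup of claims 6 and 7 can be sketched as below. This is an illustrative sketch, not the patented implementation: the field names, the example instruction table, and the use of (type, direction) alone as the combined search key are assumptions (the claims also include the action parameters of duration and amplitude in the model and the key).

```python
# Sketch of the claim-6 structured gesture model and claim-7 lookup
# (assumed field names and example instruction table).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class OperationGesture:
    op_type: str       # type identifier, e.g. "swipe"
    direction: str     # operation direction derived from the gaze direction
    duration_ms: int   # action parameter: duration
    amplitude: float   # action parameter: action amplitude

# "Precise gesture subset": combined search key -> interaction instruction.
PRECISE_SUBSET = {
    ("swipe", "left"):  "previous_screen",
    ("swipe", "right"): "next_screen",
    ("swipe", "up"):    "scroll_up",
    ("swipe", "down"):  "scroll_down",
}

def lookup_instruction(gesture: OperationGesture) -> Optional[str]:
    """Query the mapping database; returns None when no rule matches."""
    return PRECISE_SUBSET.get((gesture.op_type, gesture.direction))
```

For example, `lookup_instruction(OperationGesture("swipe", "left", 200, 0.8))` would return `"previous_screen"` under this assumed table.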

Description

Smart watch interaction method and system based on gesture actions

Technical Field

The invention relates to the technical field of electronic smart watches, and in particular to a smart watch interaction method and system based on gesture actions.

Background

With the rapid development of wearable-device technology, touch-screen wearable devices such as smart watches have become an important carrier of human-machine interaction thanks to their portability. However, existing touch-screen wearable devices are limited by screen size: fingers easily block the displayed content during touch operation, the required precision of the touch position is extremely high, and operation is therefore difficult. In this regard, Chinese patent application CN105094675A provides an optimization scheme: it first captures the user's hand operation on an operation plane of the smart watch (a plane that is not level with the display plane of the smart watch but serves as an enlarged plane), then obtains and responds to the machine operation instruction corresponding to that hand operation. Because the area of this plane is far larger than the touch interface of the touch-screen wearable device, the user's hand no longer blocks the small screen during interaction, and finger-operation precision is improved. However, this plane and the screen face the same direction relative to the user; that is, the user's line of sight must pass through the smart-watch plane to see the content on the screen, or the plane must be projected onto a wall surface through projection technology.
In this operation mode, the line of sight is easily disturbed during operation, or the user must keep one hand steady in front of the body while the other hand operates across the smart-watch plane, so the user experience is poor. To improve the interaction experience when operating a smart device, Chinese patent application CN115509358A discloses a gesture interaction method: by recognizing the first finger action of playing a virtual instrument made by a user wearing a wearable device, it plays the sound corresponding to the first playing unit of the virtual instrument matching that finger action and displays a first effect showing that the playing unit has been played; by comparing the pitch and beat played by the user with the standard pitch and beat, it recognizes the user's playing errors and prompts the user, thereby realizing playing interaction with a virtual instrument, simulating a real music-playing scene and improving the user's interaction experience. The applicant sought to transfer this technology to the daily interaction of a smart watch, but after intensive research found that the overall operation logic of a smart watch is complex: it generally requires many kinds of instructions, such as launching applications, switching functions, adjusting parameters and selecting modes, and the existing general gesture-interaction approach suffers from complicated interaction instructions and misrecognition caused by confusion with everyday behaviors.
In other words, the existing technology can recognize finger and wrist actions related to virtual-instrument playing, or other single instructions, but in actual use, controlling a smart watch through such gestures still yields a poor interaction effect. Therefore, a smart watch interaction method and system capable of improving the user interaction experience is needed.

Disclosure of Invention

The invention provides a smart watch interaction method and system based on gesture actions, which can improve the user interaction experience. In order to solve the above technical problems, the application provides the following technical scheme: a smart watch interaction method based on gesture actions, comprising the following steps: Step 1: keeping a screen-off standby state, the smart watch enters a preset wake-up mode after receiving a preset wake-up gesture input by the user, or after a preset condition is met; Step 2: after the state-mode information is determined, collecting an operation gesture input by the user and judging, according to a preset judging rule, whether the operation gesture matches the wake-up gesture: if so, performing step 4; if the facial features and the express