CN-121995822-A - Multi-position expression cooperative control method, device, system, equipment and product of anthropomorphic robot

CN121995822A

Abstract

The invention discloses a multi-part expression cooperative control method, device, system, equipment and product for an anthropomorphic robot, relating to the technical field of bionic humans. The method first acquires a target expression control signal; in response to that signal, it obtains mutually independent silence delay parameters in one-to-one correspondence with at least two expression actuators; it then synchronously starts the corresponding silence delays based on those parameters; finally, as each delay expires, it drives the corresponding actuator. This solves the "programmed sequential feel" and "mechanical coordination" caused in the prior art by asynchronous dispatch of start instructions to multiple parts, effectively simulates the biological characteristic of "synchronous response, asynchronous execution" in human facial nerve signals, markedly improves the naturalness and lifelike quality of the robot's expressions, and is convenient for practical application and popularization.

Inventors

  • HU CHENXU
  • JIANG ZHEYUAN
  • WANG CHENG

Assignees

  • 北京松延动力科技集团股份有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-01-08

Claims (10)

  1. A multi-part expression cooperative control method for an anthropomorphic robot, characterized by comprising the following steps: acquiring a target expression control signal; responding to the target expression control signal by acquiring at least two mutually independent silence delay parameters in one-to-one correspondence with at least two expression actuators of the anthropomorphic robot; synchronously starting at least two silence delays, based respectively on the at least two silence delay parameters and in one-to-one correspondence with the at least two expression actuators; and, for each of the at least two silence delays, triggering and driving the corresponding expression actuator when that delay expires.
  2. The method according to claim 1, wherein the target expression control signal comprises either an instruction data packet, from a network or an external control terminal, indicating a target expression to present, or real-person expression video data, from a camera, indicating a target expression to reproduce.
  3. The method according to claim 1, wherein, when the target expression control signal is an instruction data packet from a network or an external control terminal indicating a target expression to present, acquiring the at least two mutually independent silence delay parameters in one-to-one correspondence with the at least two expression actuators comprises: parsing the at least two silence delay parameters from the instruction data packet.
  4. The method according to claim 1, wherein, when the target expression control signal is real-person expression video data from a camera indicating a target expression to reproduce, acquiring the at least two mutually independent silence delay parameters in one-to-one correspondence with the at least two expression actuators comprises: performing visual analysis on the real-person expression video data, and calculating the at least two silence delay parameters from the visual analysis results.
  5. The method according to claim 4, wherein performing visual analysis on the real-person expression video data and calculating the at least two silence delay parameters from the visual analysis results comprises: performing visual analysis on the real-person expression video data to obtain at least two action-intensity time series in one-to-one correspondence with at least two expression actions; determining, from the at least two action-intensity time series, the expression action with the earliest onset time; and, for each of the at least two expression actions, first determining its onset time from the corresponding action-intensity time series, then calculating the difference between that onset time and the earliest onset time to obtain a relative time difference, and finally determining the silence delay parameter of the corresponding expression actuator from the relative time difference.
  6. A multi-part expression cooperative control device for an anthropomorphic robot, characterized by comprising a control signal acquisition unit, a silence delay parameter acquisition unit, a silence delay starting unit and an actuator triggering unit in sequential communication connection, wherein: the control signal acquisition unit is configured to acquire a target expression control signal; the silence delay parameter acquisition unit is configured to respond to the target expression control signal by acquiring at least two mutually independent silence delay parameters in one-to-one correspondence with at least two expression actuators of the anthropomorphic robot; the silence delay starting unit is configured to synchronously start at least two silence delays, based respectively on the at least two silence delay parameters and in one-to-one correspondence with the at least two expression actuators; and the actuator triggering unit is configured, for each of the at least two silence delays, to trigger and drive the corresponding expression actuator when that delay expires.
  7. A multi-part expression cooperative control system for an anthropomorphic robot, characterized by comprising a remote control device, a remote control service device, a bionic action workshop and a robot terminal, wherein the robot terminal integrates a plurality of expression actuators; the remote control device is communicatively connected to the remote control service device and is configured to issue a target expression control signal and transmit it to the remote control service device; the remote control service device is communicatively connected to the bionic action workshop and is configured to forward the target expression control signal to the bionic action workshop; and the bionic action workshop is communicatively connected to the robot terminal and is configured, after receiving the target expression control signal, to execute the multi-part expression cooperative control method for an anthropomorphic robot according to any one of claims 1-5.
  8. A computer device, characterized by comprising a storage module, a processing module and a transceiver module in sequential communication connection, wherein the storage module is configured to store a computer program, the transceiver module is configured to send and receive messages, and the processing module is configured to read the computer program and execute the multi-part expression cooperative control method for an anthropomorphic robot according to any one of claims 1-5.
  9. A computer-readable storage product having instructions stored thereon, characterized in that, when the instructions are run on a computer, the multi-part expression cooperative control method for an anthropomorphic robot according to any one of claims 1-5 is executed.
  10. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a computer, implement the multi-part expression cooperative control method for an anthropomorphic robot according to any one of claims 1-5.
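The per-actuator delay mechanism of claim 1 can be illustrated with a minimal sketch. This is not the patented implementation; the actuator names, the callback interface, and the use of Python timer threads are assumptions for illustration only:

```python
import threading

def start_expression(delay_params, drive):
    """Synchronously start one independent 'silence delay' per actuator.

    delay_params: mapping of actuator name -> delay in seconds.
    drive:        callback invoked as drive(actuator_name) when that
                  actuator's delay expires.

    All timers are armed in the same pass ("synchronous response");
    each actuator then fires only when its own delay ends
    ("asynchronous execution"), mimicking the onset-time spread of
    facial muscle groups responding to a single nerve signal.
    """
    timers = [threading.Timer(delay, drive, args=(actuator,))
              for actuator, delay in delay_params.items()]
    for t in timers:
        t.start()  # all delays are measured from (nearly) the same instant
    return timers

if __name__ == "__main__":
    fired = []
    # Example: mouth corners lead, eye region follows 150 ms later.
    timers = start_expression(
        {"mouth_corner": 0.0, "orbicularis_oculi": 0.15},
        lambda name: fired.append(name),
    )
    for t in timers:
        t.join()
    print(fired)  # mouth_corner should come first
```

Arming every timer before any of them fires keeps a common time origin, which is the "synchronous starting" step of the claim; the spread between actuators then comes only from their individual delay parameters.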

Description

Multi-position expression cooperative control method, device, system, equipment and product of anthropomorphic robot

Technical Field

The invention belongs to the technical field of bionic humans, and in particular relates to a multi-part expression cooperative control method, device, system, equipment and product for an anthropomorphic robot.

Background

With the rapid development of humanoid robot and social robot technology, the naturalness and lifelike quality of their facial expressions have become key to improving human-computer interaction experience and establishing emotional connection. However, current technical paths toward realistic robot expression still have significant limitations in timing coordination, so robot expressions remain stiff and mechanical and struggle to convey subtle emotion. Mainstream anthropomorphic expression control currently follows two paths:

(1) Instruction-driven expression control. Its inherent defect is coarse control granularity: an instruction usually specifies only what expression to make and lacks fine planning of how the parts should move in sequence. The motions of multiple facial parts therefore appear highly synchronized and mechanical, failing to simulate the inherent biological characteristic of the human face, in which multiple muscle groups respond synchronously to a nerve signal but execute asynchronously; the generated expression thus lacks the dynamic rhythm unique to a living body.

(2) Expression control based on visual imitation, in which the system captures human expressions through sensors such as cameras and drives the robot to imitate them. Current technology focuses mainly on reproducing the spatial form of the target expression (such as the angle or height of a specific part) and its motion trajectory, but generally neglects accurate restoration of the timing dynamics behind that surface. In particular, existing systems struggle to recognize and reproduce the subtle onset-time differences between parts of the source expression (for example, the slightly delayed contraction of the orbicularis oculi, which is critical for distinguishing a genuine 'smile' from a polite 'etiquette smile'), so the imitated expression is 'similar in shape but not in spirit', stiff and unnatural.

In summary, the prior art shares a blind spot at the architecture level: it lacks a universal control core that can uniformly handle the coordinated timing of multi-part expressions. Existing schemes either couple timing control too tightly to a specific instruction source, resulting in poor extensibility, or entirely ignore the key value of timing information for realism. The root cause is the failure to decouple two levels of the problem, understanding expression intent and coordinating execution timing, so that highly natural multi-part coordinated expressions conforming to biomechanical rules cannot be generated. The art therefore urgently needs an innovative control scheme to solve the above problems.

Disclosure of Invention

The invention aims to provide a multi-part expression cooperative control method, device, system, computer device, computer-readable storage product and computer program product for an anthropomorphic robot, to solve the mechanical and unnatural feel caused by the lack of independent asynchronous timing planning for each actuator in existing anthropomorphic expression control schemes.
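The onset-time analysis discussed in the background (and formalized in claim 5 as relative time differences between action-intensity time series) can be sketched as follows. This is an illustrative assumption rather than the patent's specified algorithm: the fixed intensity threshold, the sample-period interface, and the action names are all hypothetical.

```python
def silence_delays(intensity_series, sample_period, threshold=0.2):
    """Turn per-action intensity curves into per-actuator silence delays.

    intensity_series: mapping of expression-action name -> list of
                      intensity samples (0..1), one curve per action.
    sample_period:    seconds between consecutive samples.
    threshold:        intensity level treated as the action's onset
                      (an illustrative choice, not from the patent).

    Returns a mapping of action name -> silence delay in seconds,
    relative to the earliest-starting action (which gets 0.0).
    """
    def onset_index(samples):
        # Index of the first sample at or above the onset threshold.
        for i, value in enumerate(samples):
            if value >= threshold:
                return i
        return len(samples)  # the action never starts within the clip

    onsets = {action: onset_index(curve)
              for action, curve in intensity_series.items()}
    earliest = min(onsets.values())
    return {action: (idx - earliest) * sample_period
            for action, idx in onsets.items()}

if __name__ == "__main__":
    # Mouth corners cross the threshold at sample 2, the eye region at
    # sample 5; at 25 fps (0.04 s per sample) the eyes lag by 0.12 s.
    delays = silence_delays(
        {"mouth_corner": [0.0, 0.1, 0.5, 0.8],
         "orbicularis_oculi": [0.0, 0.0, 0.0, 0.0, 0.1, 0.6]},
        sample_period=0.04,
    )
    print(delays)
```

Per claim 5, relative onset differences of this kind become the silence delay parameters handed to the synchronous-start step, so the actuator corresponding to the earliest-starting action fires immediately and the others follow with their measured lags.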
In order to achieve the above purpose, the invention adopts the following technical scheme. In a first aspect, a multi-part expression cooperative control method for an anthropomorphic robot is provided, comprising: acquiring a target expression control signal; responding to the target expression control signal by acquiring at least two mutually independent silence delay parameters in one-to-one correspondence with at least two expression actuators of the anthropomorphic robot; synchronously starting at least two silence delays, based respectively on the at least two silence delay parameters and in one-to-one correspondence with the at least two expression actuators; and, for each of the at least two silence delays, triggering and driving the corresponding expression actuator when that delay expires. Based on the above summary, a new control scheme is provided that can uniformly handle the coordinated timing of multi-part expressions and has universality: a target expression control signal is first obtained, then