CN-121989198-A - Robot interaction method, social robot, medium and equipment

CN 121989198 A

Abstract

The application relates to the technical field of social robots, in particular to a robot interaction method, a social robot, a medium and equipment. The application recognizes the emotion of the talker based on the face image of the talker, determines the target behavior state of the robot according to the emotion of the talker, and controls the robot based on the target behavior state so that the robot responds to the emotion change of the talker by presenting the target behavior state. From the analysis, the robot fully considers the emotion change of the talker when interacting with the talker, so that the interaction effect of the robot and the talker is enhanced.

Inventors

  • Li Xueliang
  • Qi Rui
  • Kong Zhilei

Assignees

  • 南方科技大学 (Southern University of Science and Technology)

Dates

Publication Date
2026-05-08
Application Date
2026-03-12

Claims (10)

  1. A robot interaction method, comprising: collecting a face image of a talker, and obtaining the emotion of the talker by recognizing the face image; obtaining a target behavior state of the robot based on the emotion of the talker; and controlling the robot based on the target behavior state such that the robot interacts with the talker.
  2. The robot interaction method of claim 1, wherein collecting a face image of a talker and obtaining the emotion of the talker by recognizing the face image comprises: determining a current speaker among the talkers when the mode of the robot is a participant mode; and acquiring a face image of the current speaker and applying a YOLO11n emotion recognition model to the face image to obtain the emotion of the current speaker.
  3. The robot interaction method of claim 2, wherein obtaining the target behavior state of the robot based on the emotion of the talker comprises: screening, from preset behavior states of the robot and based on the emotion of the current speaker, a target behavior state for responding to the emotion of the current speaker.
  4. The robot interaction method of claim 1, wherein collecting a face image of a talker and obtaining the emotion of the talker by recognizing the face image comprises: continuously monitoring the participation degree of each talker during the interaction when the mode of the robot is a presenter mode; screening the talkers based on their participation degrees to obtain a target person; and acquiring a face image of the target person through an image acquisition device at the top of the robot, and obtaining the emotion of the target person by recognizing the face image.
  5. The robot interaction method of claim 4, wherein obtaining the target behavior state of the robot based on the emotion of the talker comprises: screening, from preset behavior states of the robot and based on the emotion of the target person, a target behavior state for adjusting the emotion of the target person.
  6. The robot interaction method of any of claims 1-5, wherein controlling the robot based on the target behavior state comprises: controlling a display system and a structural member of the robot based on the target behavior state so that the display system and the structural member cooperate to make the robot present the target behavior state.
  7. The robot interaction method of claim 6, wherein controlling the display system and the structural member of the robot based on the target behavior state comprises: when the target behavior state is a positive state, controlling the structural member to perform a reciprocating motion and controlling the display system to present a positive visual effect; when the target behavior state is a negative state, controlling the structural member to perform a relative unidirectional motion and controlling the display system to present a negative visual effect; or, when the target behavior state is a neutral state, controlling the structural member to perform a pitching motion and controlling the display system to present a gazing visual effect.
  8. A social robot, configured to implement the robot interaction method of claim 1, comprising: a base structure; an upper structure hinged to the base structure, the upper structure being a multi-degree-of-freedom structure for changing the behavior state of the robot; and a top structure hinged to the upper structure and configured to collect face images of the talker.
  9. A terminal device, comprising a memory, a processor, and a robot interaction program stored in the memory and executable on the processor, wherein the processor, when executing the robot interaction program, implements the steps of the robot interaction method according to any of claims 1-7.
  10. A computer-readable storage medium having stored thereon a robot interaction program which, when executed by a processor, implements the steps of the robot interaction method according to any of claims 1-7.
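The state-to-actuation mapping in claim 7 can be sketched as a simple lookup. All motion and display command names below are illustrative assumptions; the patent does not define an API for the structural member or the display system.

```python
# Hypothetical mapping from target behavior state to structural-member
# motion and display-system effect, following claim 7. Command names
# are illustrative stand-ins, not terms defined by the patent.
MOTION_TABLE = {
    "positive": ("reciprocating_motion", "positive_visual_effect"),
    "negative": ("relative_unidirectional_motion", "negative_visual_effect"),
    "neutral": ("pitching_motion", "gazing_visual_effect"),
}

def actuate(target_state):
    """Return the motion/display commands for a target behavior state."""
    motion, visual = MOTION_TABLE[target_state]
    return {"structural_member": motion, "display_system": visual}

print(actuate("positive")["structural_member"])  # reciprocating_motion
```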

Description

Robot interaction method, social robot, medium and equipment

Technical Field

The invention relates to the technical field of social robots, and in particular to a robot interaction method, a social robot, a medium and equipment.

Background

A social robot may be introduced into a social scene and controlled to interact with a user through communication. In the prior art, communication interaction with the user is achieved by controlling the social robot to perform predetermined actions; for example, when the user makes a handshake gesture, the social robot makes a corresponding handshake action. However, the user's emotion changes during the interaction are ignored, so the social robot cannot respond to them and the interaction effect is reduced. In summary, the prior art reduces the interaction effect between the robot and the user, and there is a need for improvement in the art.

Disclosure of Invention

To solve the above technical problem, the invention provides a robot interaction method, a social robot, a medium and equipment, addressing the reduced interaction effect between the robot and the user in the prior art. To achieve this purpose, the invention adopts the following technical scheme. In a first aspect, the invention provides a robot interaction method, including: collecting a face image of a talker and obtaining the emotion of the talker by recognizing the face image; obtaining a target behavior state of the robot based on the emotion of the talker; and controlling the robot based on the target behavior state so that the robot interacts with the talker.
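The three steps of the first aspect can be sketched as a minimal control loop. The emotion labels, the preset behavior-state table, and the stub recognizer below are assumptions; the patent specifies only that a YOLO11n-based model performs the emotion recognition.

```python
# Minimal sketch of the method: recognize emotion -> screen a target
# behavior state from preset states -> hand the state to the controller.
# The emotion labels and PRESET_STATES table are illustrative.
PRESET_STATES = {"happy": "positive", "sad": "negative", "calm": "neutral"}

def recognize_emotion(face_image):
    """Stand-in for the YOLO11n emotion recognition model."""
    return face_image.get("emotion", "calm")  # a real model would infer this

def screen_target_state(emotion):
    """Screen a target behavior state from the robot's preset states."""
    return PRESET_STATES.get(emotion, "neutral")

def interact(face_image):
    emotion = recognize_emotion(face_image)
    target_state = screen_target_state(emotion)
    return target_state  # the robot is then controlled to present this state

print(interact({"emotion": "sad"}))  # negative
```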
In one implementation, collecting a face image of a talker and obtaining the emotion of the talker by recognizing the face image includes: determining a current speaker among the talkers when the mode of the robot is a participant mode; and acquiring a face image of the current speaker and applying a YOLO11n emotion recognition model to the face image to obtain the emotion of the current speaker.

In one implementation, obtaining the target behavior state of the robot based on the emotion of the talker includes: screening, from preset behavior states of the robot and based on the emotion of the current speaker, a target behavior state for responding to the emotion of the current speaker.

In one implementation, collecting a face image of a talker and obtaining the emotion of the talker by recognizing the face image includes: continuously monitoring the participation degree of each talker during the interaction when the mode of the robot is a presenter mode; screening the talkers based on their participation degrees to obtain a target person; and acquiring a face image of the target person through an image acquisition device at the top of the robot, and obtaining the emotion of the target person by recognizing the face image.

In one implementation, obtaining the target behavior state of the robot based on the emotion of the talker includes: screening, from preset behavior states of the robot and based on the emotion of the target person, a target behavior state for adjusting the emotion of the target person.

In one implementation, controlling the robot based on the target behavior state includes: controlling a display system and a structural member of the robot based on the target behavior state so that the display system and the structural member cooperate to make the robot present the target behavior state.
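The two acquisition modes above can be sketched as a single selection function. Choosing the least-engaged talker as the target person in presenter mode is an assumption for illustration; the text says only that talkers are screened by participation degree.

```python
# Sketch of selecting whose face image to capture in each robot mode.
def select_target(mode, talkers, current_speaker=None):
    if mode == "participant":
        return current_speaker  # the current speaker's emotion is tracked
    if mode == "presenter":
        # Assumed screening rule: pick the least-engaged talker so the
        # robot can adjust that person's emotion.
        return min(talkers, key=lambda t: t["participation"])
    raise ValueError(f"unknown robot mode: {mode!r}")

talkers = [{"name": "A", "participation": 0.8},
           {"name": "B", "participation": 0.3}]
print(select_target("presenter", talkers)["name"])  # B
```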
In one implementation, controlling the display system and the structural member of the robot based on the target behavior state includes: when the target behavior state is a positive state, controlling the structural member to perform a reciprocating motion and controlling the display system to present a positive visual effect; when the target behavior state is a negative state, controlling the structural member to perform a relative unidirectional motion and controlling the display system to present a negative visual effect; or, when the target behavior state is a neutral state, controlling the structural member to perform a pitching motion and controlling the display system to present a gazing visual effect.

In a second aspect, an embodiment of the invention further provides a social robot configured to implement the above robot interaction method, the social robot comprising: a base structure; an upper structure hinged to the base structure, the upper structure being a multi-degree-of-freedom structure for changing the behavior state of the robot; and a top structure hinged to the upper structure and configured to collect face images of the talker.

In a third aspect, an embodiment of the invention further provides a terminal device, where the terminal device includes a memory, a processor, and a