KR-20260066863-A - Embedding Dynamic Personas in Interactive Robots
Abstract
The present invention relates to the application of artificial intelligence and the utilization of robots, and in particular to the application of a large language model and a dual-arm robot. According to an embodiment of the present invention, a robot system is provided that recognizes a user's 3D pose to analyze non-verbal signals, automatically switches dynamic personas using a large language model, and controls the robot's movements and facial expressions according to the persona.
Inventors
- 최성준
- 박정은
Assignees
- 고려대학교 산학협력단
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2024-11-05
Claims (1)
- A robot system that recognizes a user's 3D pose, analyzes non-verbal signals, automatically switches dynamic personas using a large language model, and controls the robot's movements and facial expressions according to the persona.
Description
Persona-based Robot-Human Interaction {Embedding Dynamic Personas in Interactive Robots}

The present invention relates to the application of artificial intelligence and the utilization of robots, and more specifically to the application of a large language model and a dual-arm robot. With the recent introduction of large language models such as OpenAI's ChatGPT and GPT-4, many people are making use of these models. Against this technological backdrop, robots equipped with large language models are attracting attention. In particular, there is demand for systems that can express various personas while interacting with humans through non-verbal signals. Against this backdrop, the present invention proposes a robot system capable of dynamically expressing various personas.

Detailed descriptions of each drawing are provided to aid understanding of the drawings cited in the detailed description of the present invention. Figure 1 is an overall flowchart of the present invention.

Specific structural or functional descriptions of embodiments according to the concept of the present invention disclosed herein are provided merely to explain those embodiments; embodiments according to the concept of the present invention may be implemented in various forms and are not limited to the embodiments described herein. Because such embodiments may be modified in various ways and take various forms, they are illustrated in the drawings and described in detail in this specification. However, this is not intended to limit the embodiments to the specific forms disclosed; the invention includes all modifications, equivalents, and substitutions that fall within its spirit and scope.
Terms such as "first" or "second" may be used to describe various components, but the components should not be limited by those terms. The terms serve only to distinguish one component from another; for example, without departing from the scope of rights according to the concept of the present invention, a first component may be named a second component, and similarly a second component may be named a first component.

When one component is said to be "connected" or "coupled" to another component, it may be directly connected or coupled to that other component, but intervening components may also be present. Conversely, when one component is said to be "directly connected" or "directly coupled" to another component, no intervening components are present. Other expressions describing relationships between components, such as "between" versus "directly between," or "adjacent to" versus "directly adjacent to," should be interpreted in the same way.

The terms used herein describe specific embodiments only and are not intended to limit the invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this specification, terms such as "comprising" or "having" indicate the presence of the stated features, numbers, steps, operations, components, parts, or combinations thereof, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof. Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meanings as commonly understood by those skilled in the art to which the present invention pertains.
Terms defined in commonly used dictionaries should be interpreted as having meanings consistent with their meaning in the context of the relevant technology, and should not be interpreted in an idealized or overly formal sense unless explicitly so defined in this specification.

Hereinafter, embodiments of the present invention are described in detail with reference to the accompanying drawings. The scope of the claims, however, is not limited or restricted by these embodiments. Identical reference numerals in the drawings indicate identical components.

The present invention proposes a system in which a robot can implement various personas. The system is broadly composed of three main modules: a recognition engine, a behavior selection engine, and a motion library.

1. Recognition Engine: The robot detects the 3D pose of the user's body in real time through an RGBD camera and analyzes the user's non-verbal signals (e.g., hand position, gaze, distance). From the recognized data, the robot computes a curiosity score to determine which user to interact with, enabling interaction between the user and the robot.

2. Behavior Selection Engine: The robot uses a finite state machine