CN-122020533-A - Personality digital person construction method and system based on prototype character multi-modal data
Abstract
The application discloses a personality digital person construction method and system based on prototype character multi-modal data, comprising: acquiring and preprocessing personality raw data of a prototype character; extracting language style fingerprint features, emotion-cognition association features, and value-view decision features from the preprocessed personality raw data; fusing these three feature types to obtain a personality gene vector; inputting the personality gene vector into a pre-constructed AI model for personality replication; and performing consistency verification and iterative optimization to obtain the final personality digital person. The application constructs a digital person capable of simulating the personality of the prototype character by extracting and quantifying the prototype character's personality genes and internalizing them as the internal logical driving force of the AI model.
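The fusion step summarized in the abstract (three feature sets combined into a single "personality gene vector") could be sketched as follows. This is a minimal illustrative sketch: the vector dimensions and the concatenate-then-normalize scheme are assumptions, not details specified in the patent.

```python
# Illustrative sketch of the feature-fusion step: three per-modality
# feature vectors are concatenated into one "personality gene vector"
# and L2-normalized. All dimensions below are assumed, not from the patent.
import numpy as np

def fuse_personality_genes(style_fp: np.ndarray,
                           emotion_cog: np.ndarray,
                           value_decision: np.ndarray) -> np.ndarray:
    """Concatenate the three feature vectors and L2-normalize the result."""
    gene = np.concatenate([style_fp, emotion_cog, value_decision])
    norm = np.linalg.norm(gene)
    return gene / norm if norm > 0 else gene

# Toy usage with assumed dimensionalities
style = np.random.rand(768)      # language style fingerprint features
emotion = np.random.rand(128)    # emotion-cognition association features
values = np.random.rand(64)      # value-view decision features
vector = fuse_personality_genes(style, emotion, values)
print(vector.shape)  # (960,)
```

In practice the fusion module might instead use weighted or learned fusion (e.g. a projection layer); simple concatenation is chosen here only to make the data flow concrete.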
Inventors
- QIAN YU
- LI YICHENG
- SUN ZHONGKAI
Assignees
- 北京齿伦转动科技有限公司
Dates
- Publication Date
- 20260512
- Application Date
- 20260126
Claims (10)
- 1. A personality digital person construction method based on prototype character multi-modal data, characterized by comprising the following steps: Step 1, acquiring personality raw data reflecting a prototype character from the prototype character's language data, sound data, and behavior data; Step 2, preprocessing the personality raw data; Step 3, inputting the preprocessed personality raw data into a large language model for deep embedding representation to obtain language style fingerprint features; Step 4, inputting the preprocessed personality raw data into an affective computing model to analyze the prototype character's emotional expression, and associating the emotional expression with its cognitive roots to obtain emotion-cognition association features; Step 5, extracting value views from the preprocessed personality raw data using a topic model and causal inference, and simulating the prototype character's value ranking and selection tendencies under specific dilemmas with a decision preference tree to obtain value-view decision features; Step 6, fusing the language style fingerprint features, the emotion-cognition association features, and the value-view decision features to obtain a personality gene vector; and Step 7, inputting the personality gene vector into a pre-constructed AI model for personality replication, and performing consistency verification and iterative optimization to obtain the final personality digital person.
- 2. The method for constructing a personality digital person based on prototype character multi-modal data according to claim 1, wherein the preprocessing of the personality raw data in Step 2 includes desensitizing, cleaning, removing noise and invalid information, and performing timestamp alignment.
- 3. The method for constructing a personality digital person based on prototype character multi-modal data according to claim 1, wherein in Step 7, when the personality gene vector is input into a pre-constructed AI model for personality replication, the AI model adopts a pre-trained basic dialogue model, and the basic dialogue model is subjected to personality-conditioned fine-tuning.
- 4. The method for constructing a personality digital person based on prototype character multi-modal data of claim 3, wherein the personality-conditioned fine-tuning is performed using a P-tuning or LoRA tuning technique.
- 5. The method for constructing a personality digital person based on prototype character multi-modal data according to claim 3, wherein the basic dialogue model adopts a large language model based on the Transformer architecture.
- 6. The method for constructing a personality digital person based on prototype character multi-modal data according to claim 1, wherein in Step 7, when the personality gene vector is input into a pre-constructed AI model for personality replication, the AI model adopts a pre-trained model as an execution engine.
- 7. The method for constructing a personality digital person based on prototype character multi-modal data according to claim 1, wherein in Step 7, the consistency verification includes objective consistency verification and subjective Turing verification.
- 8. A personality digital person construction system based on prototype character multi-modal data, comprising: a personality data acquisition module for acquiring personality raw data reflecting a prototype character from the prototype character's language data, sound data, and behavior data; a data preprocessing module for preprocessing the personality raw data; a language style fingerprint feature extraction module for inputting the preprocessed personality raw data into a large language model for deep embedding representation to obtain language style fingerprint features; an emotion-cognition association feature extraction module for inputting the preprocessed personality raw data into an affective computing model to analyze the prototype character's emotional expression, and associating the emotional expression with its cognitive roots to obtain emotion-cognition association features; a value-view decision feature extraction module for extracting value views from the preprocessed personality raw data using a topic model and causal inference, and simulating the prototype character's value ranking and selection tendencies under specific dilemmas through a decision preference tree to obtain value-view decision features; a feature fusion module for fusing the language style fingerprint features, the emotion-cognition association features, and the value-view decision features to obtain a personality gene vector; and a personality digital person construction module for inputting the personality gene vector into a pre-constructed AI model for personality replication, and performing consistency verification and iterative optimization to obtain the final personality digital person.
- 9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
- 10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
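Claims 3 and 4 describe conditioning a pre-trained dialogue model with P-tuning or LoRA fine-tuning. Below is a minimal, library-free sketch of the LoRA idea only: a frozen weight matrix W is adapted by a trainable low-rank update B @ A scaled by alpha / r. All shapes and values are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of LoRA (low-rank adaptation) for a single linear layer.
# In personality-conditioned fine-tuning, only the small matrices A and B
# would be trained while the pre-trained weights W stay frozen.
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Forward pass through a LoRA-adapted linear layer.

    W : frozen pre-trained weights, shape (d_out, d_in)
    A : trainable down-projection,  shape (r, d_in)
    B : trainable up-projection,    shape (d_out, r)
    """
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)   # low-rank weight update
    return (W + delta) @ x

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))            # B starts at zero, so the update starts at zero
x = rng.standard_normal(d_in)

# With B = 0 the adapted layer reproduces the frozen model exactly.
assert np.allclose(lora_forward(x, W, A, B), W @ x)
```

Initializing B to zero is the standard LoRA choice: the adapted model starts identical to the base dialogue model, and personality traits are injected gradually as A and B are trained.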
Description
Personality digital person construction method and system based on prototype character multi-modal data
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a personality digital person construction method and system based on prototype character multi-modal data.
Background
With the rapid development of technologies such as artificial intelligence, computer graphics, natural language processing, and speech synthesis, digital humans (Digital Human) are gradually expanding from film and television special effects and game entertainment into many industry application scenarios such as education, medical care, finance, and customer service, serving as important carriers of virtual-reality fusion. A digital person is a virtual character, constructed by digital technology, with human appearance, behavior, and even emotional interaction capability; its core technologies include three-dimensional modeling, motion capture, voice driving, affective computing, and large-model-driven intelligent dialogue systems. In recent years, thanks to breakthroughs in deep learning and generative AI, the realism, interactivity, and intelligence of digital people have improved remarkably, making them one of the key entry points of a new human-computer interaction paradigm. In the prior art, digital person creation mainly focuses on 3D modeling of appearance, timbre cloning of voice, or imitation of simple dialogue habits. Such reproduction is superficial, fragmented, and inconsistent: the digital person may occasionally imitate a sentence or an emotional expression convincingly, but in continuous interaction its internal logic, emotional response pattern, and value views quickly expose its 'non-human' nature, and it cannot provide deep emotional connection or personality trust.
Disclosure of Invention
Therefore, the application provides a personality digital person construction method and system based on prototype character multi-modal data, to solve the problem that digital persons in the prior art cannot simulate the personality of a prototype character. To achieve the above object, the present application provides the following technical solutions:
In a first aspect, a personality digital person construction method based on prototype character multi-modal data includes: Step 1, acquiring personality raw data reflecting a prototype character from the prototype character's language data, sound data, and behavior data; Step 2, preprocessing the personality raw data; Step 3, inputting the preprocessed personality raw data into a large language model for deep embedding representation to obtain language style fingerprint features; Step 4, inputting the preprocessed personality raw data into an affective computing model to analyze the prototype character's emotional expression, and associating the emotional expression with its cognitive roots to obtain emotion-cognition association features; Step 5, extracting value views from the preprocessed personality raw data using a topic model and causal inference, and simulating the prototype character's value ranking and selection tendencies under specific dilemmas with a decision preference tree to obtain value-view decision features; Step 6, fusing the language style fingerprint features, the emotion-cognition association features, and the value-view decision features to obtain a personality gene vector; and Step 7, inputting the personality gene vector into a pre-constructed AI model for personality replication, and performing consistency verification and iterative optimization to obtain the final personality digital person.
Preferably, in Step 2, the preprocessing of the personality raw data includes desensitizing, cleaning, removing noise and invalid information, and performing timestamp alignment on the personality raw data. Preferably, in Step 7, when the personality gene vector is input into a pre-constructed AI model for personality replication, the AI model adopts a pre-trained basic dialogue model, and the basic dialogue model is subjected to personality-conditioned fine-tuning. Preferably, the personality-conditioned fine-tuning technique is P-tuning or LoRA tuning. Preferably, the basic dialogue model adopts a large language model based on the Transformer architecture. Preferably, in Step 7, when the personality gene vector is input into a pre-constructed AI model for personality replication, the AI model adopts a pre-trained model as an execution engine. Preferably, in Step 7, the consistency verification includes objective consistency verification and subjective Turing verification. In a second aspect, a personality digital person construction system based on prototype character multi-modal data includes: t