
CN-115463424-B - Virtual character display control method and device and electronic equipment


Abstract

The invention provides a display control method and device for a virtual character, and an electronic device, in the technical field of games. The method comprises: first, predicting a behavior action of the virtual character with a policy model, according to the character's background information, to obtain a target behavior action; then generating a reply text for the virtual character with a pre-trained generative pre-trained language model; and finally controlling the virtual character to execute the target behavior action and display the reply text, so as to produce the character's reaction. By controlling virtual characters through model-predicted behavior actions and model-generated reply text, the method greatly reduces editing cost, improves the efficiency of generating character reactions, meets the needs of diverse actual scenes, and improves the game experience.

Inventors

  • ZHANG LINJIAN
  • GUO SUIBING
  • SONG YOUWEI
  • WANG SHUOPI
  • ZHANG CONG
  • FAN CHANGJIE
  • HU ZHIPENG

Assignees

  • NetEase (Hangzhou) Network Co., Ltd. (网易(杭州)网络有限公司)

Dates

Publication Date
2026-05-05
Application Date
2022-07-19

Claims (8)

  1. A display control method for a virtual character, comprising: predicting a behavior action of the virtual character by using a policy model, according to pre-generated background information of the virtual character, to obtain a target behavior action of the virtual character, wherein the behavior action comprises a first limb action and a language action of the virtual character; taking a description text corresponding to the background information of the virtual character as input, generating a reply text of the virtual character by using a pre-trained generative pre-trained language model; and controlling the virtual character to execute the target behavior action and display the reply text, so as to generate a character reaction of the virtual character; wherein the background information of the virtual character is generated according to character information description texts of a plurality of dimensions obtained in advance, the background information comprises player information, environment information, and NPC information, and the plurality of dimensions comprise at least player information, limb actions, language behaviors, subsequent states, and environment information; generating the background information comprises: obtaining player information in the current game scene, wherein the player information comprises a player character table, each entry of which comprises a player character name, an occupation, and a label, and splicing the occupation of each player character name in the player character table with the description text to generate a final description text of the current player character; generating the background information further comprises: obtaining NPC information in the current game scene, wherein the NPC information comprises an NPC table containing NPC names, occupations, and labels, and splicing the occupation of each NPC name in the NPC table with the description text to generate a final description text of the current NPC; and after the step of generating the reply text of the virtual character by using the pre-trained generative pre-trained language model with the description text corresponding to the background information of the virtual character as input, the method further comprises: predicting state information of the virtual character by using the policy model, according to the pre-generated background information of the virtual character and the reply text of the virtual character, to determine a subsequent state of the virtual character, wherein the subsequent state comprises a second limb action of the virtual character.
  2. The display control method of a virtual character according to claim 1, further comprising, after the step of controlling the virtual character to execute the target behavior action and displaying the reply text to generate the character reaction of the virtual character: controlling the virtual character to execute the second limb action.
  3. The display control method of a virtual character according to claim 1, wherein the policy model comprises a PPL policy and a Seq2Seq model, the PPL policy being generated based on a trained GPT model; and wherein predicting the behavior action of the virtual character by using the policy model, according to the pre-generated background information of the virtual character, to obtain the target behavior action of the virtual character comprises: predicting with the GPT model, according to the pre-generated background information of the virtual character, to generate a first prediction result of the behavior action; predicting with the Seq2Seq model, according to the pre-generated background information of the virtual character, to generate a second prediction result of the behavior action; and determining the target behavior action based on the behavior actions corresponding to the first prediction result and the second prediction result.
  4. The display control method of a virtual character according to claim 3, wherein the step of determining the target behavior action based on the behavior actions corresponding to the first prediction result and the second prediction result comprises: when the first prediction result is the same as the second prediction result, determining the behavior action corresponding to the first or second prediction result as the target behavior action; when the first prediction result differs from the second prediction result and only one of them is non-empty, determining the behavior action corresponding to the non-empty prediction result as the target behavior action; and when the first prediction result differs from the second prediction result and neither of them is empty, determining the behavior action corresponding to the first prediction result as the target behavior action.
  5. A display control device for a virtual character, comprising: a target behavior action determining module, configured to predict a behavior action of the virtual character by using a policy model, according to pre-generated background information of the virtual character, to obtain a target behavior action of the virtual character; a reply text generation module, configured to take a description text corresponding to the background information of the virtual character as input and generate a reply text of the virtual character by using a pre-trained generative pre-trained language model; a display control module, configured to control the virtual character to execute the target behavior action and display the reply text, so as to generate a character reaction of the virtual character; and a background information generation module, configured to generate the background information of the virtual character according to character information description texts of a plurality of dimensions, wherein the background information comprises player information, environment information, and NPC information; the background information generation module is further configured to obtain player information in the current game scene, wherein the player information comprises a player character table, each entry of which comprises a player character name, an occupation, and a label, each label corresponds to at least one description text, and the occupation of each player character name in the player character table is spliced with the description text to generate a final description text of the current player character; the background information generation module is further configured to obtain NPC information in the current game scene, wherein the NPC information comprises an NPC table, each entry of which comprises an NPC name, an occupation, and a label, each label corresponds to at least one description text, and the occupation of each NPC name in the NPC table is spliced with the description text to generate a final description text of the current NPC; the device further comprises a subsequent state determining module, configured to predict state information of the virtual character by using the policy model, according to the pre-generated background information of the virtual character and the reply text of the virtual character, and to determine a subsequent state of the virtual character, wherein the subsequent state comprises a second limb action of the virtual character.
  6. The display control device of claim 5, wherein the display control module is further configured to control the virtual character to execute the second limb action.
  7. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the display control method of a virtual character according to any one of claims 1 to 4.
  8. A computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to execute the display control method of a virtual character according to any one of claims 1 to 4.
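As a sketch of the "splicing" step in claims 1 and 5 (combining each table entry's name and occupation with the description texts attached to its labels into a final description text): the table layout, field names, and joining format below are illustrative assumptions; the claims fix only the fields involved (name, occupation, label).

```python
def build_description_text(character_table, label_texts):
    """Splice each entry's name and occupation with its labels' description
    texts to produce the final description text for that character.
    Works for both the player character table and the NPC table."""
    lines = []
    for entry in character_table:
        # Per claim 5, each label corresponds to at least one description text.
        descriptions = "; ".join(label_texts[label] for label in entry["labels"])
        lines.append(f"{entry['name']}, {entry['occupation']}: {descriptions}")
    return "\n".join(lines)
```

The resulting text is what the claims call the final description text, which is then fed to the generative language model as input.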
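The selection rule in claim 4 for merging the GPT and Seq2Seq prediction results can be sketched as a small function; representing an empty prediction as `None` and the function name are assumptions for illustration.

```python
from typing import Optional

def select_target_action(gpt_pred: Optional[str],
                         seq2seq_pred: Optional[str]) -> Optional[str]:
    """Merge the two model predictions following the rule in claim 4."""
    if gpt_pred == seq2seq_pred:
        return gpt_pred          # identical results: use either one
    if gpt_pred is None:
        return seq2seq_pred      # only the Seq2Seq result is non-empty
    if seq2seq_pred is None:
        return gpt_pred          # only the GPT result is non-empty
    return gpt_pred              # both non-empty but different: first (GPT) result wins
```

The last branch reflects the claim's tie-break: when the two non-empty predictions disagree, the first prediction result takes precedence.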

Description

Virtual character display control method and device and electronic equipment

Technical Field

The present invention relates to the field of game technologies, and in particular to a method and an apparatus for controlling the display of a virtual character, and an electronic device.

Background

In a massively multiplayer online role-playing game (MMORPG), a number of non-player characters (NPCs) are typically provided that are not controlled by a real player and can interact with game players. The reactions of an in-game character (NPC or player character) to its surrounding environment (other NPCs, players, weather, time, etc.), such as limb movements, emotional movements, text replies, and subsequent states, are generally collectively referred to as "character reactions". In one related technology, character reactions in a game scene are realized by manual editing: states corresponding to particular instructions are written into virtual characters in advance, so that the character makes the corresponding reaction upon a given instruction, generally through a pre-edited action loop. In another related technology, text replies of characters rely on fixed question-and-answer templates, as in systems such as Microsoft XiaoIce, Alibaba's AliMe, and Google's Meena. In some practical scenarios, however, characters need to generate replies according to changes in the surrounding environment: a "question" is just one dimension of that environment, and weather conditions, the current time, actions performed by players, and so on are all dimensions that need to be considered. Moreover, to increase the diversity of character reactions and enhance the player's game experience, multiple texts are usually edited for each state, so the number of texts actually grows geometrically, and the editing cost of such large-scale game text is very high.
For character text replies that use a question-answering template, only fixed replies can be generated, based on keywords in the question, and no timely adjustment can be made when the surrounding environment changes; that is, the requirement of diverse actual scenes cannot be met. In existing MMORPGs, therefore, virtual character reactions cannot satisfy diverse actual scenes, and there are technical problems of huge editing cost, a narrow range of applicable scenes, and a poor game experience.

Disclosure of Invention

Accordingly, the present invention aims to provide a method, a device, and an electronic device for controlling the display of a virtual character, so as to solve the prior-art problems of the high cost of editing character reactions, the narrow range of applicable scenes, and the poor game experience. To achieve the above object, the technical scheme adopted by the embodiments of the invention is as follows. According to a first aspect, an embodiment of the present invention provides a method for controlling the display of a virtual character, including: predicting, according to pre-generated background information of the virtual character, a behavior action of the virtual character by using a policy model to obtain a target behavior action of the virtual character, where the behavior action includes a first limb action and a language action of the virtual character; generating a reply text of the virtual character by using a pre-trained generative pre-trained language model with a description text corresponding to the background information as input; and controlling the virtual character to execute the target behavior action and display the reply text to generate a character reaction of the virtual character.
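The three-step method of the first aspect (predict the target action from the background information, generate the reply text from the description text, then execute and display) can be sketched as follows. The `PolicyModel` and `ReplyModel` classes and their methods are hypothetical stand-ins, since the patent does not specify model APIs, and the toy prediction logic is for illustration only.

```python
class PolicyModel:
    """Stand-in for the policy model mapping background info to an action."""
    def predict(self, background: dict) -> str:
        # A real implementation would run GPT / Seq2Seq inference here.
        return "wave" if "player" in background.get("description_text", "") else "idle"

class ReplyModel:
    """Stand-in for the pre-trained generative language model."""
    def generate(self, description_text: str) -> str:
        return f"[reply conditioned on: {description_text}]"

def generate_character_reaction(background: dict) -> dict:
    # Step 1: predict the target behavior action from the background information.
    action = PolicyModel().predict(background)
    # Step 2: generate the reply text from the background's description text.
    reply = ReplyModel().generate(background["description_text"])
    # Step 3: the game engine would now execute `action` and display `reply`.
    return {"action": action, "reply": reply}
```

In the patented method the output of step 2 is also fed back into the policy model to predict the character's subsequent state (the second limb action), which is omitted in this sketch.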
In one possible implementation, after the step of generating the reply text of the virtual character by using the pre-trained generative pre-trained language model with the description text corresponding to the background information as input, the method further comprises: predicting state information of the virtual character by using the policy model, according to the pre-generated background information of the virtual character and the reply text of the virtual character, to determine a subsequent state of the virtual character, wherein the subsequent state comprises a second limb action of the virtual character. In one possible implementation, after the step of controlling the virtual character to execute the target behavior action and displaying the reply text to generate the character reaction of the virtual character, the method further comprises controlling the virtual character to execute the second limb action. In one possible implementation, the policy model comprises a PPL policy and a Seq2Seq model; the PPL policy is generated based on a trained GPT model; the method comprises the steps of predicting behavior actions of a virtua