CN-121999770-A - Control method of bionic toy, bionic toy and readable storage medium

CN121999770A

Abstract

The invention relates to the technical field of bionic toys, and in particular to a control method for a bionic toy, the bionic toy itself, and a readable storage medium. The method comprises: converting a user's voice information into text information; providing a preset keyword database, a preference database, an emotion database, and an intention database; judging whether the text information matches the preset keyword database and, if not, converting keywords into keyword information based on a preset model and classifying the keyword information to obtain a classification result; judging whether the keyword information matches the preference database and, if so, executing corresponding feedback based on the classification result; if not, judging whether the keyword information matches the emotion database and, if so, outputting the corresponding emotion value variable; and if not, executing the feedback matched from the intention database based on the keyword information. The invention solves the problem that a bionic toy's actions are monotonous during control.
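The four-stage matching cascade described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the database representation (plain dicts), the keyword extractor, and the classifier are all hypothetical placeholders, since the abstract does not specify them.

```python
def handle_utterance(text, keyword_db, preference_db, emotion_db, intention_db,
                     extract_keywords, classify):
    """Dispatch a user utterance through the four-stage matching cascade.

    Each database is modeled here as a dict mapping a matched key to a
    feedback payload (a hypothetical representation for illustration).
    """
    # Stage 1: direct match against the preset keyword database.
    if text in keyword_db:
        return ("feedback", keyword_db[text])
    # No direct match: extract keyword information with the preset model
    # and classify it to obtain a classification result.
    keywords = extract_keywords(text)
    category = classify(keywords)
    # Stage 2: preference database -> feedback based on the classification.
    for kw in keywords:
        if kw in preference_db:
            return ("feedback", preference_db[kw][category])
    # Stage 3: emotion database -> output an emotion value variable.
    for kw in keywords:
        if kw in emotion_db:
            return ("emotion_value", emotion_db[kw])
    # Stage 4: fall through to the intention database.
    for kw in keywords:
        if kw in intention_db:
            return ("feedback", intention_db[kw])
    return ("no_match", None)
```

The cascade ordering (keyword > preference > emotion > intention) mirrors the priority implied by the abstract's sequence of "if so / if not" branches.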

Inventors

  • XU XIAOXIANG
  • YU QIANG
  • FENG XIANGPENG
  • CHEN SHENGJIE
  • LU BIN
  • YE RUIFENG
  • LI NANXI
  • HUANG QIANG
  • YU XIANG
  • FAN WENJUAN
  • SUN LI

Assignees

  • 四川酷盼科技有限公司

Dates

Publication Date
2026-05-08
Application Date
2025-12-30

Claims (12)

  1. A control method for a bionic toy, characterized by comprising the following steps: providing a bionic toy, wherein the bionic toy acquires interaction information; judging whether the interaction information is user voice information; if not, controlling the bionic toy to execute feedback corresponding to the interaction information; if so, converting the user voice information into text information; providing a preset keyword database, a preference database, an emotion database, and an intention database; judging whether the text information matches the preset keyword database; if so, executing corresponding feedback based on the text information; otherwise, converting keywords into keyword information based on a preset model and classifying the keyword information to obtain a classification result; judging whether the keyword information matches the preference database; if so, executing corresponding feedback based on the classification result; if not, judging whether the keyword information matches the emotion database; if the keyword information matches the emotion database, outputting the corresponding emotion value variable; and if the keyword information does not match the emotion database, matching the keyword information against the intention database and executing corresponding feedback based on the keyword information.
  2. The control method of a bionic toy according to claim 1, wherein acquiring the interaction information comprises acquiring the user voice information together with any one or a combination of touch information, sight-line following information, preset voice information, connected-device information, and internal state information.
  3. The method of claim 1, wherein outputting the corresponding emotion value variable further comprises executing feedback corresponding to the emotion value variable based on the emotion value variable and/or an interest value variable, wherein the feedback comprises at least one part of the bionic toy performing an action, the bionic toy making a sound, or different parts of the bionic toy moving in a preset order and combining those actions into an action set.
  4. The control method of a bionic toy according to claim 1, wherein outputting the corresponding emotion value variable further comprises: providing a bionic toy, wherein the bionic toy acquires initial interaction information and emotion value variables; acquiring an initial personality based on the initial interaction information; directly outputting the initial personality as the current personality of the bionic toy, or acquiring new interaction information under the initial personality, updating the initial personality based on the new interaction information, and taking the updated personality as the current personality of the bionic toy; acquiring an emotion value base, an emotion value change factor, and a natural decay factor corresponding to the current personality; calculating a final emotion value based on the emotion value base, the emotion value change factor, the natural decay factor, and the emotion value variable corresponding to the current personality; and executing corresponding feedback based on the final emotion value to complete the emotion output.
  5. The method of claim 4, wherein acquiring new interaction information under the initial personality comprises: acquiring new interaction information under the initial personality; acquiring the interaction time and/or the interaction count when the new interaction information is acquired; obtaining a new interaction value based on the interaction time and/or the interaction count; judging whether the new interaction value falls within a preset personality-change value range; if not, superimposing the new interaction information on the initial interaction information and continuing the iteration with the result as new initial interaction information; and if so, outputting the preset personality corresponding to the preset personality-change value as the current personality of the bionic toy.
  6. The control method of a bionic toy according to claim 4, wherein acquiring the initial personality based on the initial interaction information comprises: acquiring a personality value variable and a growth value variable based on the initial interaction information; updating the current growth stage based on the growth value variable to obtain the personality change coefficient corresponding to the current growth stage; and obtaining the initial personality based on a personality initial value, the personality value variable, and the personality change coefficient; wherein the growth value variable is obtained from a preset growth value formula: N = N0 + Σ(i=1..m) ΔNi, where N represents the current growth value variable, N0 represents the initial growth value (a known value), m is the number of time periods, Σ(i=1..m) ΔNi represents the cumulative growth change value after the i-th time period, and ΔNi represents the growth change value of the i-th time period.
  7. The control method of a bionic toy according to claim 6, wherein obtaining the initial personality based on the personality initial value, the personality value variable, and the personality change coefficient comprises: the personality value variable comprises an intimacy value variable and a liveness value variable; a first interaction value is obtained based on the initial interaction information, and the intimacy value variable of the bionic toy is obtained based on the first interaction value; a real-time intimacy value and a real-time liveness value are acquired based on a preset personality value formula, the personality initial value, the personality value variable, and the personality change coefficient; and the initial personality is obtained based on the real-time intimacy value and the real-time liveness value; the preset personality value formula is: Ax = Ax0 + M1*Q1 and Ay = Ay0 + M2*Q2, where Ax represents the real-time intimacy value, Ax0 represents the initial intimacy value, M1 represents the intimacy change coefficient of the growth stage corresponding to the bionic toy, Q1 represents the first interaction value, Ay represents the real-time liveness value, Ay0 represents the initial liveness value, M2 represents the liveness change coefficient of the growth stage corresponding to the bionic toy, and Q2 represents a second interaction value.
  8. The method of claim 4, wherein calculating the final emotion value comprises: the emotion value variable comprises a restlessness value variable and an interest value variable, which are obtained through a preset emotion value formula: X = X1 + X′, X1 = X0 + k1·t; Y = Y1 + Y′, Y1 = Y0 + k2·t; where X represents the restlessness value variable, X1 represents the restlessness base value, X0 represents the restlessness value at the last interaction, X′ represents the restlessness value obtained this time, k1 represents the restlessness decay coefficient of the corresponding personality, k1·t represents the natural decay of the restlessness value for that personality, Y represents the interest value variable, Y1 represents the interest base value, Y0 represents the interest value at the last interaction, Y′ represents the interest value obtained this time, k2 represents the interest decay coefficient of the corresponding personality, t represents the time during which the bionic toy has had no interaction, and k2·t represents the natural decay of the interest value for that personality.
  9. The method of claim 4, wherein executing feedback based on the final emotion value to complete the emotion output comprises: the final emotion value comprises a restlessness value and an interest value; establishing a two-dimensional coordinate system with the restlessness value and the interest value as its coordinate axes; dividing the plane of the two-dimensional coordinate system into at least two emotion regions; plotting the restlessness value and the interest value of the final emotion value as a point in the two-dimensional coordinate system and identifying the emotion region in which the point falls; and outputting the emotion of the final emotion value based on that emotion region and executing the corresponding feedback based on the emotion.
  10. The method of claim 4, wherein the feedback comprises at least one part of the bionic toy performing an action, the bionic toy making a sound, or different parts of the bionic toy moving in a predetermined sequence and combining into an action set; and when bionic toys have different personalities, after obtaining the same emotion value variable, the feedback each executes based on the final emotion value differs.
  11. A bionic toy applying the control method of a bionic toy according to any one of claims 1-10, characterized by comprising a toy body, a touch sensing module, a measuring module, an auditory localization module, a voice module, a signal transmission module, a battery module, and a control module electrically connected to the touch sensing module, the measuring module, the auditory localization module, the voice module, the signal transmission module, and the battery module respectively; the touch sensing module is used to acquire touch information; the measuring module is used to acquire sight-line following information; the auditory localization module is used to acquire sound and judge its position; the voice module is used to acquire preset voice information or user voice information; the signal transmission module is used to acquire connected-device information; the battery module is used to acquire internal state information; and the control module is used to control the bionic toy to execute feedback.
  12. A readable storage medium, wherein the readable storage medium stores computer instructions for causing a computer to execute the control method of a bionic toy according to any one of claims 1 to 10.
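The growth and personality formulas of claims 6 and 7 can be sketched directly. The function and variable names below are illustrative only; the claims define the formulas (N = N0 + ΣΔNi, Ax = Ax0 + M1*Q1, Ay = Ay0 + M2*Q2) but not an implementation.

```python
def growth_value(n0, deltas):
    """Claim 6: current growth value N = N0 plus the sum of the
    per-period growth change values ΔNi."""
    return n0 + sum(deltas)

def personality_values(ax0, ay0, m1, m2, q1, q2):
    """Claim 7: real-time intimacy Ax = Ax0 + M1*Q1 and real-time
    liveness Ay = Ay0 + M2*Q2, where M1 and M2 are the change
    coefficients of the current growth stage and Q1, Q2 are the first
    and second interaction values."""
    return ax0 + m1 * q1, ay0 + m2 * q2
```

The growth value selects the current growth stage, which in turn supplies the coefficients M1 and M2 used by the personality formula.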
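Claims 8 and 9 can likewise be sketched: the restlessness and interest variables each combine a base value (the previous value plus a personality-specific natural-change term k·t over t units without interaction) with the value obtained from the current interaction, and the resulting pair is mapped to an emotion region in a 2-D plane. The sign convention for k1, k2 and the quadrant partition with its emotion labels are assumptions for illustration; the claims only require at least two regions.

```python
def emotion_values(x0, y0, dx, dy, k1, k2, t):
    """Claim 8: X = X1 + X' with X1 = X0 + k1*t, and
    Y = Y1 + Y' with Y1 = Y0 + k2*t (negative k gives decay)."""
    x = (x0 + k1 * t) + dx
    y = (y0 + k2 * t) + dy
    return x, y

def emotion_region(x, y):
    """Claim 9: locate the point (restlessness, interest) in a 2-D
    coordinate system divided into emotion regions. A simple quadrant
    partition with hypothetical labels is used here."""
    if x >= 0 and y >= 0:
        return "excited"
    if x >= 0:
        return "anxious"
    if y >= 0:
        return "content"
    return "bored"
```

Per claim 10, the feedback actually executed for a given region would additionally depend on the toy's current personality.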

Description

Control method of bionic toy, bionic toy and readable storage medium

Technical Field

The invention relates to the technical field of bionic toys, and in particular to a control method for a bionic toy, the bionic toy, and a readable storage medium.

Background

In the rapidly developing field of toys and companion robots, toys with bionic functions, such as robot dogs and robot cats, are becoming increasingly popular. Through built-in programs and motor drives, these products can perform a series of predefined actions such as walking, tail wagging, head turning, and making sounds, and can attract users' attention at first. However, this action feedback depends heavily on pre-written scripts or simple direct instructions, and lacks the ability to perceive and understand the environment, especially the user's emotional state. A control method for bionic toys is therefore needed to solve the problems that such toys act monotonously during control and cannot give action feedback based on a person's emotions in real time.

Disclosure of Invention

To solve the problems that bionic-function toys act monotonously during control and cannot give action feedback based on a person's emotions in real time, the invention provides a control method for a bionic toy, a bionic toy, and a readable storage medium.
The bionic toy control method comprises: providing a bionic toy, the bionic toy acquiring interaction information; judging whether the interaction information is user voice information; if not, controlling the bionic toy to execute feedback corresponding to the interaction information; if so, converting the user voice information into text information; providing a preset keyword database, a preference database, an emotion database, and an intention database; judging whether the text information matches the preset keyword database; if so, executing corresponding feedback based on the text information; if not, converting keywords into keyword information based on a preset model and classifying the keyword information to obtain a classification result; judging whether the keyword information matches the preference database; if so, executing corresponding feedback based on the classification result; if not, judging whether the keyword information matches the emotion database; if so, outputting the corresponding emotion value variable; and if not, executing the feedback matched from the intention database based on the keyword information. This ensures the accuracy and diversity of the bionic toy's interactive responses to the user, and solves the problems that a bionic-function toy acts monotonously during control and cannot give action feedback based on a person's emotions in real time.

Preferably, the method comprises obtaining the emotion value variable based on the user voice information, wherein the interaction information comprises initial interaction information and user voice information, and the initial interaction information comprises any one or more of touch information, sight-line following information, preset voice information, connected-device information, or internal state information.
Preferably, outputting the corresponding emotion value variable further comprises: the emotion value variable comprises a restlessness value variable and an interest value variable; feedback corresponding to the emotion value variable is executed based on the restlessness value variable and/or the interest value variable, and the feedback comprises at least one part of the bionic toy performing an action, the bionic toy making a sound, or different parts of the bionic toy moving in a preset order and combining those actions into a set.

Preferably, the method further comprises: providing a bionic toy, the bionic toy obtaining initial interaction information and emotion value variables; obtaining an initial personality based on the initial interaction information; directly outputting the initial personality as the current personality of the bionic toy, or obtaining new interaction information under the initial personality, updating the initial personality based on the new interaction information, and using the updated personality as the current personality of the bionic toy; obtaining the emotion value base, emotion value change factor, and natural decay factor corresponding to the current personality; calculating the final emotion value based on these personality-specific factors and the emotion value variable; and executing corresponding feedback based on the final emotion value to complete the emotion output. This ensures that each emotion output genuinely reflects the toy's unique personality, thereby solving the problem of uniform, personality-independent feedback.
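The final-emotion computation described above (a personality-specific base value, change factor, and natural decay factor combined with the accumulated emotion value variable) can be sketched as follows. The text does not give the exact combination rule, so a simple linear form is assumed here, with all names hypothetical.

```python
def final_emotion_value(base, change_factor, decay_factor, variable, t):
    """Sketch of the final-emotion computation: the personality-specific
    base value is shifted by the emotion value variable scaled by the
    change factor, minus natural decay over t units without interaction.
    The linear combination is an assumption, not the patented formula."""
    return base + change_factor * variable - decay_factor * t
```

Because base, change_factor, and decay_factor are looked up per personality, two toys with different personalities produce different final emotion values (and hence different feedback) from the same emotion value variable, as claim 10 requires.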