
KR-20260067941-A - ELECTRONIC DEVICE, METHOD AND NON-VOLATILE COMPUTER-READABLE STORAGE MEDIUM FOR PROVIDING ANIMATION FOR CHARACTER

KR20260067941A

Abstract

The electronic device includes a display, a memory storing instructions, and at least one processor comprising processing circuitry. When the instructions are executed individually or collectively by the at least one processor, the electronic device: obtains, when text is acquired through an input UI provided through the display, context information corresponding to the acquired text; obtains user emotion information corresponding to each sentence unit of the text based on the context information; obtains, based on the emotion information, an animation character that provides an animation effect in which the character speaks the text while the character's facial expression changes in correspondence with each sentence unit of the text; and inputs the animation character into the input UI.

Inventors

  • 박미지
  • 아흐메드 피 엠디 아르바즈
  • 찬드라 마드하반 프라카시
  • 다스 프라팀 싯다르타
  • 로이 수딥
  • 자 부샨 비샬
  • 박종필
  • 박주현
  • 배동환
  • 신동수
  • 이승민
  • 이채연
  • 아타베일 산제이 안잘리
  • 카다프카르 샤시칸트 락스만

Assignees

  • Samsung Electronics Co., Ltd. (삼성전자주식회사)

Dates

Publication Date
2026-05-13
Application Date
2024-12-10
Priority Date
2024-11-06

Claims (20)

  1. An electronic device (100), comprising: a display (130); a memory (120) storing instructions; and at least one processor (110) comprising processing circuitry, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: when text is obtained through an input UI (user interface) provided through the display, obtain context information corresponding to the obtained text; obtain, based on the context information, user emotion information corresponding to each sentence unit of the text; obtain, based on the emotion information, an animation character providing an animation effect in which the character speaks the text while the character's facial expression changes in correspondence with each sentence unit of the text; and input the animation character into the input UI.
  2. The electronic device of claim 1, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: obtain first emotion information of a user based on context information corresponding to a first text corresponding to a first sentence unit included in the text, and obtain second emotion information of the user based on context information corresponding to a second text corresponding to a second sentence unit obtained after the first text; and obtain an animation character in which the character sequentially changes facial expressions based on the first emotion information and the second emotion information while sequentially speaking the first text and the second text.
  3. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to identify the context information corresponding to the text based on at least one of a keyword included in the text, profile information of the user, a relationship between the user and a recipient to whom the text is transmitted, or a context of a conversation with the recipient.
  4. The electronic device of claim 1 or 2, further comprising: a microphone; a camera; and a sensor, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the context information corresponding to the text based on at least one of voice information of the user obtained through the microphone, facial expression information of the user obtained through the camera, or gesture information of the user obtained through the sensor.
  5. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the animation character by inputting the text and at least one of the voice information of the user, the facial expression information of the user, or the gesture information of the user into a trained artificial intelligence model.
  6. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: identify at least one piece of font information for font visualization for each sentence unit of the text based on the user emotion information corresponding to each sentence unit of the text; and provide, based on the at least one piece of font information, a visualization effect for the font of each sentence unit of the text input into the input UI, wherein the at least one piece of font information comprises at least one of font type information, font size information, font color information, font in/out information, font rotation information, or font emphasis information.
  7. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain, based on pronunciation feature information of the user, the animation character that speaks the text, wherein the pronunciation feature information of the user comprises at least one of speed, intonation, tone, stress, phoneme pronunciation, linking, or assimilation.
  8. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: convert the text into speech using text-to-speech (TTS); obtain mouth shape information in phoneme units based on pronunciation of the speech; and obtain the animation character whose mouth shape changes based on the mouth shape information.
  9. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: provide a UI including a plurality of characters based on the context information of the text; and when one of the plurality of characters is selected, obtain the animation character based on the selected character.
  10. The electronic device of claim 1 or 2, wherein the input UI provides at least one of a text input UI for inputting text or a voice input widget UI for inputting a user voice, and wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the text entered through the text input UI, or obtain the text based on the user voice entered through the voice input widget UI.
  11. The electronic device of claim 1 or 2, further comprising: a microphone; a camera; and a sensor, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: while a widget is displayed on the display, obtain emotion information of the user based on at least one of voice information of the user obtained through the microphone, facial expression information of the user obtained through the camera, or gesture information of the user obtained through the sensor; obtain, based on the emotion information of the user, an animation widget providing an animation effect in which a facial expression of a character included in the widget changes; and store the animation widget in the memory or transmit the animation widget outside the electronic device according to a user command.
  12. The electronic device of claim 1 or 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: when text is received from an external device, obtain user emotion information corresponding to each sentence unit of the received text based on context information corresponding to the received text; obtain, based on the emotion information, an animation character providing an animation effect in which the character speaks the received text while the character's facial expression changes in correspondence with each sentence unit of the received text; and display the received text and the animation character on the display.
  13. A method for controlling an electronic device, comprising: when text is obtained through an input UI, obtaining context information corresponding to the obtained text; obtaining user emotion information corresponding to each sentence unit of the text based on the context information; obtaining, based on the emotion information of the user, an animation character providing an animation effect in which the character speaks the text while the character's facial expression changes in correspondence with each sentence unit of the text; and inputting the animation character into the input UI.
  14. The method of claim 13, wherein obtaining the emotion information of the user comprises: obtaining first emotion information of the user based on context information corresponding to a first text corresponding to a first sentence unit included in the text, and obtaining second emotion information of the user based on context information corresponding to a second text corresponding to a second sentence unit obtained after the first text; and the method comprises obtaining an animation character in which the character sequentially changes facial expressions based on the first emotion information and the second emotion information while sequentially speaking the first text and the second text.
  15. The method of claim 13 or 14, wherein obtaining the emotion information of the user comprises identifying the context information corresponding to the text based on at least one of a keyword included in the text, profile information of the user, a relationship between the user and a counterpart to whom the text is transmitted, or a context of a conversation with the counterpart.
  16. The method of claim 13 or 14, wherein obtaining the emotion information of the user comprises obtaining the context information corresponding to the text based on at least one of voice information of the user obtained through a microphone, facial expression information of the user obtained through a camera, or gesture information of the user obtained through a sensor.
  17. The method of claim 13 or 14, wherein obtaining the animation character comprises obtaining the animation character by inputting the text and at least one of voice information of the user, facial expression information of the user, or gesture information of the user into a trained artificial intelligence model.
  18. The method of claim 13 or 14, further comprising: identifying at least one piece of font information for font visualization for each sentence unit of the text based on the user emotion information corresponding to each sentence unit of the text; and providing, based on the at least one piece of font information, a visualization effect for the font of each sentence unit of the text input into the input UI, wherein the at least one piece of font information comprises at least one of font type information, font size information, font color information, font in/out information, font rotation information, or font emphasis information.
  19. The method of claim 13 or 14, wherein obtaining the animation character comprises obtaining the animation character in which the character speaks the text based on pronunciation feature information of the user, wherein the pronunciation feature information of the user comprises at least one of speed, intonation, stress, phoneme pronunciation, linking, or assimilation.
  20. A non-transitory computer-readable medium storing instructions that, when executed by a processor of an electronic device, cause the electronic device to perform operations comprising: when text is obtained through an input UI, obtaining context information corresponding to the obtained text; obtaining user emotion information corresponding to each sentence unit of the text based on the context information; obtaining, based on the emotion information of the user, an animation character providing an animation effect in which the character speaks the text while the character's facial expression changes in correspondence with each sentence unit of the text; and inputting the animation character into the input UI.
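Claims 1 and 2 describe segmenting text into sentence units, attaching emotion information to each unit, and playing back an expression per unit while the character speaks. A minimal sketch of that pipeline is shown below; it is illustrative only, not the claimed implementation. The keyword lexicon, the `ExpressionFrame` class, and all function names are invented for illustration; the patent instead derives emotion from context information, optionally via a trained artificial intelligence model.

```python
import re
from dataclasses import dataclass

# Toy keyword lexicon standing in for the trained emotion model (assumption).
EMOTION_KEYWORDS = {
    "happy": {"great", "love", "congratulations"},
    "sad": {"sorry", "miss", "unfortunately"},
}

@dataclass
class ExpressionFrame:
    sentence: str
    emotion: str  # drives the character's facial expression for this sentence unit

def split_sentences(text: str) -> list[str]:
    # Naive sentence segmentation on terminal punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]

def classify_emotion(sentence: str) -> str:
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def build_animation_track(text: str) -> list[ExpressionFrame]:
    # One expression frame per sentence unit, played back sequentially
    # while the character speaks each sentence (cf. claims 1-2).
    return [ExpressionFrame(s, classify_emotion(s)) for s in split_sentences(text)]

track = build_animation_track("Congratulations on the new job! Sorry I missed the party.")
for frame in track:
    print(frame.emotion, "->", frame.sentence)
```

A production system would replace `classify_emotion` with the context-aware model of claims 3 to 5, which also weighs the user's profile, the relationship with the recipient, and the conversation history.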
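Claim 8 drives the character's mouth from phoneme-level pronunciation of the TTS output. A common way to realize this is a phoneme-to-viseme lookup; the sketch below assumes ARPAbet-style phoneme labels from a TTS engine, and the table and names are hypothetical, not taken from the patent.

```python
# Hypothetical phoneme-to-viseme table (assumption); a real system would
# consume timed phonemes emitted by the TTS engine of claim 8.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "IY": "wide",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth-on-lip", "V": "teeth-on-lip",
}

def mouth_shapes(phonemes: list[str]) -> list[str]:
    # Map each phoneme of the spoken text to a mouth shape; unknown
    # phonemes fall back to a resting pose.
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]

# e.g. "mom" ~ M AA M
print(mouth_shapes(["M", "AA", "M"]))  # ['closed', 'open', 'closed']
```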

Description

Electronic device, method and non-volatile computer-readable storage medium for providing animation for a character

The present disclosure relates to an electronic device, a method, and a non-transitory computer-readable storage medium for providing animation of a character. Recently, portable electronic devices of various types, such as smartphones, tablet PCs, wireless earphones, and smartwatches, have become increasingly widespread. In services used for communication with other users, such as text messaging and social network services (SNS), the use of emojis, which can intuitively express a user's various emotions, is growing significantly. The information described above may be provided as related art for the purpose of aiding understanding of the present disclosure. No claim or determination is made as to whether any of the foregoing is applicable as prior art with regard to the present disclosure.

The above and other aspects, features, and advantages of specific embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings. FIG. 1 is a diagram illustrating the schematic operation of an electronic device according to one embodiment. FIG. 2 illustrates an example of a block diagram of an electronic device according to one embodiment. FIG. 3 is a flowchart illustrating the operation of an electronic device according to one embodiment. FIG. 4 is a diagram illustrating a generative artificial intelligence model according to an embodiment of the present disclosure. FIGS. 5A to 5E are diagrams illustrating a method of providing a UI screen according to one embodiment. FIGS. 6A and 6B are diagrams illustrating a method of providing a UI screen according to one embodiment. FIGS. 7A to 7C are diagrams illustrating a method of switching between an animation character and text according to one embodiment. FIGS. 8A to 8C are diagrams illustrating a rapid response method according to one embodiment. FIGS. 9A to 9C are diagrams illustrating an audio response method according to one embodiment. FIGS. 10A to 10C are diagrams illustrating a customized rapid response method according to one embodiment. FIGS. 11 and 12 are diagrams illustrating a method for generating an animation character according to one embodiment. FIGS. 13A to 13F are diagrams illustrating a character selection method according to one embodiment. FIGS. 14A and 14B are diagrams illustrating a method of providing a UI screen according to one embodiment. FIGS. 15A and 15B are diagrams illustrating a voice personalization method for an animation character according to one embodiment. FIGS. 16A and 16B are diagrams illustrating a rapid response method in a specific type of device according to one embodiment. FIGS. 17A to 17C are diagrams illustrating an audio response method in a specific type of device according to one embodiment. FIGS. 18A to 18E are diagrams illustrating a method for creating an animation widget in a specific type of device according to one embodiment. FIGS. 19A to 19C are diagrams illustrating a method of providing interaction for a widget in a specific type of device according to one embodiment. FIGS. 20A to 20C are diagrams illustrating a method of providing an animation widget when a notification is received in a specific type of device according to one embodiment. FIGS. 21A and 21B are diagrams illustrating a method of generating an animation character in a specific type of device according to one embodiment. FIG. 22 is a diagram illustrating an application capable of supporting animation characters according to one embodiment. FIG. 23 is a flowchart illustrating a method for generating an animation character based on received text according to one embodiment. FIG. 24 is a block diagram of an electronic device in a network environment according to various embodiments.

The present disclosure will be described in detail below with reference to the attached drawings. The terms used in the embodiments of this disclosure have been selected as widely used, general terms wherever possible, taking into account their functions within this disclosure; however, these terms may vary depending on the intent of those skilled in the art, legal precedent, or the emergence of new technologies. Additionally, in specific cases, terms have been selected at the applicant's discretion, and in such cases their meanings are described in detail in the corresponding description. Therefore, terms used in this disclosure should be defined based on their meanings and the overall content of this disclosure, rather than merely their names. In this specification, expressions such as "have," "may have," "include," or "may include" indicate the presence of the stated features (e.g., numerical values, functions, operations, or components such as parts) and do not exclude the presence of a