JP-7857320-B2 - Presenting facial expressions in virtual meetings

JP7857320B2

Inventors

  • ジェ-ウォン・チェ
  • サンラック・ユン
  • ジャンフン・チョ
  • ハヌル・キム
  • ヒョンウ・パク
  • スンハン・ヤン
  • キョ・ウン・ファン

Assignees

  • Qualcomm, Incorporated

Dates

Publication Date
2026-05-12
Application Date
2022-02-23
Priority Date
2021-05-14

Claims (15)

  1. A method performed by a processor of a computing device for presenting facial expressions on an avatar in a virtual meeting, comprising: detecting a facial expression of a user based on information received from sensors of the computing device; determining whether the detected facial expression of the user has been previously approved for presentation on an avatar in the virtual meeting; in response to determining that the detected facial expression of the user was not previously approved for presentation on an avatar in the virtual meeting, generating an avatar representing a facial expression that is approved for presentation on an avatar in the virtual meeting but is different from the detected facial expression of the user; in response to determining that the detected facial expression of the user has been previously approved for presentation on an avatar in the virtual meeting, generating an avatar representing a facial expression consistent with the detected facial expression of the user; and presenting the generated avatar in the virtual meeting.
  2. The method according to claim 1, wherein generating, in response to determining that the detected facial expression of the user was not previously approved for presentation on an avatar in the virtual meeting, an avatar representing a facial expression that is approved for presentation on an avatar in the virtual meeting but is different from the detected facial expression of the user comprises continuing to present the currently presented avatar.
  3. The method according to claim 1, wherein generating, in response to determining that the detected facial expression of the user was not previously approved for presentation on an avatar in the virtual meeting, an avatar representing a facial expression that is approved for presentation on an avatar in the virtual meeting but is different from the detected facial expression of the user comprises generating an avatar representing a recently approved facial expression.
  4. The method according to claim 1, wherein detecting the facial expression of the user based on information received from sensors of the computing device comprises detecting the facial expression of the user based on information received from an image sensor of the computing device.
  5. The method according to claim 1, wherein determining whether the detected facial expression of the user is approved for presentation on an avatar in the virtual meeting comprises: rendering an avatar representing a facial expression consistent with the detected facial expression of the user on a user interface configured to receive an approval or rejection from the user; and determining that the detected facial expression of the user is approved in response to receiving, on the user interface of the computing device, an input indicating that the facial expression of the user is approved for presentation in the virtual meeting.
  6. The method according to claim 5, further comprising determining that the detected facial expression of the user is approved for presentation on the avatar in the virtual meeting if no responsive input is received on the user interface within a threshold period of time.
  7. The method according to claim 1, further comprising storing in memory an indication that an avatar representing a facial expression consistent with the detected facial expression of the user is approved or not approved for presentation on an avatar in the virtual meeting, in response to receiving, on a user interface of the computing device, an input indicating that the detected facial expression of the user is approved or not approved for presentation on an avatar in the virtual meeting.
  8. The method according to claim 1, wherein determining whether the detected facial expression of the user has been previously approved for presentation on an avatar in the virtual meeting comprises: determining whether the detected facial expression of the user is stored in a preset list as approved or not approved; rendering an avatar representing a facial expression consistent with the detected facial expression of the user on a user interface configured to receive an approval or rejection from the user; and updating the preset list in response to receiving an input, different from the preset list, indicating whether the facial expression of the user is approved for presentation on the avatar in the virtual meeting.
  9. The method according to claim 1, wherein determining whether the detected facial expression of the user is approved for presentation on the avatar in the virtual meeting comprises determining whether the detected facial expression of the user is approved for presentation on the avatar in the virtual meeting based on an expressive voice of the user.
  10. The method according to claim 1, wherein presenting the generated avatar in the virtual meeting comprises rendering a representation of an expressive voice of the user in the virtual meeting in conjunction with presenting the generated avatar in the virtual meeting.
  11. A computing device, comprising: means for detecting a facial expression of a user based on information received from a sensor of the computing device; means for determining whether the detected facial expression of the user has been previously approved for presentation on an avatar in a virtual meeting; means for generating, in response to a determination that the detected facial expression of the user was not previously approved for presentation on an avatar in the virtual meeting, an avatar representing a facial expression that is approved for presentation on an avatar in the virtual meeting but is different from the detected facial expression of the user; means for generating, in response to a determination that the detected facial expression of the user has been previously approved for presentation on an avatar in the virtual meeting, an avatar representing a facial expression consistent with the detected facial expression of the user; and means for presenting the generated avatar in the virtual meeting.
  12. The computing device according to claim 11, wherein the means for generating, in response to a determination that the detected facial expression of the user was not previously approved for presentation on an avatar in the virtual meeting, an avatar representing a facial expression that is approved for presentation on an avatar in the virtual meeting but is different from the detected facial expression of the user comprises means for continuing to present the currently presented avatar.
  13. The computing device according to claim 11, wherein the means for generating, in response to a determination that the detected facial expression of the user was not previously approved for presentation on an avatar in the virtual meeting, an avatar representing a facial expression that is approved for presentation on an avatar in the virtual meeting but is different from the detected facial expression of the user comprises means for generating an avatar representing a recently approved facial expression.
  14. The computing device according to claim 11, further comprising means for performing the method according to any one of claims 1 to 10.
  15. A non-transitory processor-readable storage medium storing processor-executable instructions configured to cause a processor of a computing device to perform the method according to any one of claims 1 to 10.
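
The approval-filtering flow recited in claims 1-3 and 7-8 can be sketched roughly as follows. All identifiers (`ApprovalList`, `choose_avatar_expression`, the string labels for expressions) are illustrative assumptions for this sketch, not terms defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalList:
    """Preset list recording which detected expressions the user has
    approved or rejected for presentation (claims 7-8, hypothetical)."""
    decisions: dict = field(default_factory=dict)  # expression -> bool

    def lookup(self, expression: str):
        # Returns True/False if previously reviewed, None if never seen.
        return self.decisions.get(expression)

    def update(self, expression: str, approved: bool):
        # Update the preset list in response to user input (claim 8).
        self.decisions[expression] = approved


def choose_avatar_expression(detected: str, approvals: ApprovalList,
                             last_approved: str = "neutral") -> str:
    """Return the expression to render on the avatar (claims 1-3).

    If the detected expression was previously approved, the avatar
    mirrors it; otherwise the avatar falls back to a recently
    approved (or currently displayed) expression.
    """
    if approvals.lookup(detected):
        return detected       # previously approved: mirror the user
    return last_approved      # not approved (or unreviewed): safe fallback


# Usage: "frown" was rejected earlier, so the avatar keeps smiling.
approvals = ApprovalList()
approvals.update("smile", True)
approvals.update("frown", False)
print(choose_avatar_expression("smile", approvals, last_approved="smile"))  # smile
print(choose_avatar_expression("frown", approvals, last_approved="smile"))  # smile
```

Expressions never reviewed take the same fallback path as rejected ones, which matches the claimed behavior of only mirroring expressions that were *previously approved*; the timeout-based default approval of claim 6 would sit in the user-interface layer, outside this sketch.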

Description

Related Application: This application claims priority to U.S. Patent Application No. 17/320,627, filed on May 14, 2021, entitled "Presenting A Facial Expression In A Virtual Meeting," which is incorporated herein by reference in its entirety.

Communication networks have enabled the development of applications and services for online meetings and gatherings. Some systems provide a virtual environment that presents visual representations of attendees, known as "avatars," which can range from simplified or cartoon-like images to photorealistic renderings. Some of these systems include devices, such as virtual reality (VR) headsets or other VR equipment, that record user movements and voices. Such systems can generate facial expressions on a user's avatar based on the user's movements and speech. However, the facial expressions, words, and actions of virtual meeting participants may not be relevant to the meeting. Users may react to various events in their real-world environment, such as interruptions from children, pets, or other people, external noise, phone calls, and other distractions. A system may also detect and display facial expressions that users do not wish to show to others in online meetings, such as anger, displeasure, or frustration. Furthermore, a system may inaccurately capture and display facial expressions, producing a mismatch between what the user intends to convey and the expression displayed, which may be awkward, embarrassing, or rude. Current systems do not provide a mechanism for users to review or approve the facial expressions displayed on their avatars.
This is a system block diagram showing an exemplary communication system suitable for implementing any of the various embodiments. This is an exemplary block diagram of the components of a computing device suitable for implementing any of the various embodiments. This is a block diagram of components showing an exemplary computing system architecture suitable for implementing any of the various embodiments. This is a conceptual diagram illustrating various embodiments of methods for presenting facial expressions in a virtual meeting. This is a process flow diagram illustrating various methods for presenting facial expressions in a virtual meeting. This figure shows actions that can be performed as part of a method for presenting facial expressions in a virtual meeting, according to various embodiments. This figure shows actions that can be performed as part of a method for presenting facial expressions in a virtual meeting, according to various embodiments.

Various embodiments will be described in detail with reference to the attached drawings. Wherever possible, the same reference numerals will be used throughout the drawings to refer to the same or similar parts. References made to specific examples and implementations are for illustrative purposes only and are not intended to limit the various embodiments or claims.

Various embodiments provide methods, which may be implemented within a device such as a mobile computing device, for displaying on participants' avatars in a virtual meeting only the facial expressions those participants deem appropriate. Various embodiments enable the computing device to learn which facial expressions the user has approved or rejected for display on the user's avatar in a virtual meeting. Various embodiments eliminate the need for additional bulky and expensive peripherals, such as virtual reality headsets and similar devices.
Various embodiments improve the operation of computing devices and virtual meeting systems by enabling automatic filtering of the facial expressions rendered on participant avatars, thereby improving the conduct of virtual meetings.

The terms "component," "module," and "system" include, but are not limited to, computer-related entities such as hardware, firmware, hardware-software combinations, software, or software in execution, configured to perform a particular operation or function. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, an execution thread, a program, and/or a computer. As an example, both an application running on a computing device and the computing device itself may be referred to as components. One or more components may reside within a process and/or an execution thread, and components may be localized on one processor or core and/or distributed across two or more processors or cores. In addition, these components may execute from various non-transitory computer-readable media storing various instructions and/or data structures. Components may communicate by local and/or remote processes, function or procedure calls, electronic signals, data packets, memory reads/writes, and other known computer, processor, and/or process-related communication methods. The term "computing device" is used herein to refer to any or all of the following similar electronic devices, including