KR-20260068133-A - USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS

KR 20260068133 A

Abstract

The systems and methods described herein provide a method for capturing video in real time by an image capture device. The system provides a plurality of visual pose hints, identifies first pose information in the video while capturing the video, applies a first series of virtual effects to the video, identifies second pose information, and applies a second series of virtual effects to the video, wherein the second series of virtual effects is based on the first series of virtual effects.

Inventors

  • Alavi, Amir
  • Raikliok, Olha
  • Xu, Xintong
  • Solichin, Jonathan
  • Voronova, Olesia
  • Yagodin, Artem

Assignees

  • Snap Inc.

Dates

Publication Date
2026-05-13
Application Date
2021-08-13
Priority Date
2020-08-13

Claims (20)

  1. A method comprising: causing display of a skeletal pose tracking system on a graphical user interface of a computing device, the graphical user interface including an image of a human body; receiving, from the computing device, a selection of one or more regions of the human body; receiving augmented reality effect data for each of the one or more regions of the human body, the augmented reality effect data including a virtual effect, a trigger condition, and an accuracy threshold; and causing modification of a video including a human user based on the received augmented reality effect data.
  2. The method of claim 1, wherein the video includes first pose information representing a first plurality of joint positions of the human user depicted in the video.
  3. The method of claim 1, wherein the video includes second pose information representing a second plurality of joint positions of the human user depicted in the video.
  4. The method of claim 1, wherein the video comprises a first series of virtual effects including a plurality of first augmented reality content items.
  5. The method of claim 4, wherein the plurality of first augmented reality content items are applied to the video in real time during capture.
  6. The method of claim 4, wherein the video comprises a second series of virtual effects including a plurality of second augmented reality content items.
  7. The method of claim 6, further comprising: storing the video including the first series of virtual effects at a first time and the second series of virtual effects at a second time; and transmitting the video to a second computing device as an ephemeral message.
  8. The method of claim 1, further comprising: identifying a hand in the video; tracking motion of the hand from a first position to a second position; and modifying a level of refinement of the virtual effect based on the tracked motion of the hand.
  9. A system comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the system to perform operations comprising: causing display of a skeletal pose tracking system on a graphical user interface of a computing device, the graphical user interface including an image of a human body; receiving, from the computing device, a selection of one or more regions of the human body; receiving augmented reality effect data for each of the one or more regions of the human body, the augmented reality effect data including a virtual effect, a trigger condition, and an accuracy threshold; and causing modification of a video including a human user based on the received augmented reality effect data.
  10. The system of claim 9, wherein the video includes first pose information representing a first plurality of joint positions of the human user depicted in the video.
  11. The system of claim 9, wherein the video includes second pose information representing a second plurality of joint positions of the human user depicted in the video.
  12. The system of claim 9, wherein the video comprises a first series of virtual effects including a plurality of first augmented reality content items.
  13. The system of claim 12, wherein the plurality of first augmented reality content items are applied to the video in real time during capture.
  14. The system of claim 12, wherein the video comprises a second series of virtual effects including a plurality of second augmented reality content items.
  15. The system of claim 14, wherein the operations further comprise: storing the video including the first series of virtual effects at a first time and the second series of virtual effects at a second time; and transmitting the video to a second computing device as an ephemeral message.
  16. The system of claim 9, wherein the operations further comprise: identifying a hand in the video; tracking motion of the hand from a first position to a second position; and modifying a level of refinement of the virtual effect based on the tracked motion of the hand.
  17. A non-transitory computer-readable storage medium comprising instructions that, when executed by a computer, cause the computer to perform operations comprising: causing display of a skeletal pose tracking system on a graphical user interface of a computing device, the graphical user interface including an image of a human body; receiving, from the computing device, a selection of one or more regions of the human body; receiving augmented reality effect data for each of the one or more regions of the human body, the augmented reality effect data including a virtual effect, a trigger condition, and an accuracy threshold; and causing modification of a video including a human user based on the received augmented reality effect data.
  18. The non-transitory computer-readable storage medium of claim 17, wherein the video includes first pose information representing a first plurality of joint positions of the human user depicted in the video.
  19. The non-transitory computer-readable storage medium of claim 17, wherein the video includes second pose information representing a second plurality of joint positions of the human user depicted in the video.
  20. The non-transitory computer-readable storage medium of claim 17, wherein the video comprises a first series of virtual effects including a plurality of first augmented reality content items.

Description

User Interface for Pose-Driven Virtual Effects

Claim of Priority

This application claims the benefit of priority to U.S. Provisional Application No. 62/706,391, filed August 13, 2020, the entirety of which is incorporated herein by reference.

Background

In many videos today, effects such as screen shake and color correction are added in post-processing after the video is filmed. This is particularly popular in dance videos that are recreated by different creators.

Brief Description of the Drawings

In the drawings, which are not necessarily drawn to scale, like reference numbers may describe similar components in different views. To facilitate identification of the discussion of any particular element or act, the most significant digit or digits of a reference number refer to the figure number in which that element is first introduced. Some examples are illustrated in the accompanying drawings by way of example and not limitation.

FIG. 1 is a schematic representation of a networked environment in which the present disclosure may be deployed, according to some examples.
FIG. 2 is a schematic representation of a messaging system having both client-side and server-side functionality, according to some examples.
FIG. 3 is a schematic representation of a data structure as maintained in a database, according to some examples.
FIG. 4 is a schematic representation of a message, according to some examples.
FIG. 5 is a flowchart of an access-limiting process, according to some examples.
FIG. 6 is a flowchart of a method for capturing video in real time by an image capture device, according to some examples.
FIG. 7 is a schematic representation of a skeletal pose system, according to some examples.
FIG. 8 is an exemplary user behavior flow of a skeletal pose system, according to some examples.
FIG. 9 is an exemplary user behavior flow of a skeletal pose system, according to some examples.
FIG. 10 is an exemplary user behavior flow of a skeletal pose system, according to some examples.
FIG. 11 is an exemplary user behavior flow of a skeletal pose system, according to some examples.
FIG. 12 is a schematic representation of a skeletal pose system, according to some exemplary embodiments.
FIG. 13 is a flowchart of a method for capturing video in real time by an image capture device, according to some examples.
FIGS. 14 through 19 are exemplary user interfaces of a skeletal pose system, according to some exemplary embodiments.
FIG. 20 is a block diagram illustrating a software architecture in which examples may be implemented.
FIG. 21 is a schematic representation of a machine in the form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein, according to some examples.

Detailed Description

The proposed systems and methods describe a skeletal pose system that uses human movement to drive visual effects using augmented reality (AR). For example, the skeletal pose system detects the user's pose (e.g., how the user's body is positioned and the angles between the joints) to "trigger" a virtual effect. In another example, the skeletal pose system tracks the user's hands or joints to allow the user to control the desired level of the virtual effect. In one example, the skeletal pose system detects the user's hand relative to a reference point to trigger a virtual effect (e.g., if the user moves a hand toward the corner of the camera viewfinder, the virtual effect is triggered). In another example, the skeletal pose system detects hand gestures to trigger a virtual effect. The skeletal pose system can also link multiple virtual effects together as a sequence of effects.

Networked Computing Environment

FIG. 1 is a block diagram illustrating an exemplary messaging system (100) for exchanging data (e.g., messages and associated content) over a network.
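By way of illustration only, a trigger condition with an accuracy threshold, and a hand-motion-driven effect level of the kind described above, might be modeled as follows. This sketch is not part of the claimed disclosure; all function names, joint names, and numeric values are hypothetical, and a real system would obtain the joint keypoints from a skeletal tracking model.

```python
import math

def joint_angle(a, b, c):
    # Angle in degrees at joint b, formed by keypoints a-b-c (each an (x, y) pair).
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def pose_triggers(joints, target_angle, accuracy_threshold):
    # Trigger condition: the elbow angle matches a target pose within the
    # accuracy threshold (degrees).
    angle = joint_angle(joints["shoulder"], joints["elbow"], joints["wrist"])
    return abs(angle - target_angle) <= accuracy_threshold

def effect_level(start, end, max_travel):
    # Map tracked hand motion from a first position to a second position onto
    # a 0.0-1.0 intensity for the virtual effect.
    travel = math.hypot(end[0] - start[0], end[1] - start[1])
    return min(1.0, travel / max_travel)

# A raised forearm: the elbow forms a right angle.
joints = {"shoulder": (0.0, 0.0), "elbow": (1.0, 0.0), "wrist": (1.0, 1.0)}
print(pose_triggers(joints, target_angle=90.0, accuracy_threshold=10.0))  # True
print(effect_level((0.0, 0.0), (3.0, 4.0), max_travel=10.0))  # 0.5
```

In this reading, the per-region augmented reality effect data of claim 1 would pair such a trigger condition and accuracy threshold with the virtual effect to apply, and the hand-tracking path of claim 8 would feed the tracked first and second positions into the level computation.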
The messaging system (100) includes multiple instances of client devices (106), each of which hosts multiple applications including messaging clients (108). Each messaging client (108) is coupled to communicate with other instances of messaging clients (108) and messaging server systems (104) over a network (102) (e.g., the Internet). A messaging client (108) can communicate and exchange data with another messaging client (108) and a messaging server system (104) through a network (102). The data exchanged between messaging clients (108) and between a messaging client (108) and a messaging server system (104) includes functions (e.g., commands that invoke functions) as well as payload data (e.g., text, audio, video, or other multimedia data). The messaging server system (104) provides server-side functionality to a specific messaging client (108) via the network (102). Although specific functions of the messaging system (100) are described herein as being performed by the messaging client (108) or by the messaging server system (104), the location of specific functions within the messaging c