KR-20260064734-A - Virtual manipulation of augmented and virtual reality objects
Abstract
Systems and methods are provided. For example, the method includes determining the position of a user's hand and identifying a manipulation gesture performed by the user targeting a virtual object. The method also includes determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed, and determining a 3D end point based on movement of the user's hand from the origin point. The method additionally includes deriving a 3D vector based on the 3D origin point and the 3D end point, and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the reach of the user's arm.
Inventors
- Spong, Mason
Assignees
- Snap Inc.
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-09-06
- Priority Date: 2023-09-07
Claims (20)
- A system comprising: one or more hardware processors; and at least one memory storing instructions that cause the one or more hardware processors to perform operations comprising: determining a position of a user's hand; identifying a manipulation gesture performed by the user targeting a virtual object; determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed; determining a 3D end point based on movement of the user's hand from the origin point; deriving a 3D vector based on the 3D origin point and the 3D end point; and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
- The system of claim 1, wherein applying the action comprises imparting a velocity to the targeted virtual object based on the 3D vector, the velocity imparted to the targeted virtual object having a direction and a magnitude corresponding to the vector direction of the 3D vector.
- The system of claim 2, wherein the magnitude is based on a length of the 3D vector or a hand speed of the user.
- The system of claim 2, wherein imparting the velocity comprises moving the targeted virtual object in the direction and at the velocity in an augmented reality (AR) or virtual reality (VR) environment.
- The system of claim 4, the operations further comprising stopping the movement of the targeted virtual object based on identifying a stop gesture performed by the user.
- The system of claim 1, wherein the manipulation gesture comprises a hand gesture, a body gesture, a voice command, or a combination thereof.
- The system of claim 6, wherein the hand gesture comprises a pinch gesture, and wherein determining the 3D end point comprises determining a point in space where the user stops moving the user's hand while maintaining the pinch gesture.
- The system of claim 7, wherein identifying the manipulation gesture comprises using an artificial intelligence model to determine that the manipulation gesture is the pinch gesture.
- The system of claim 8, wherein the artificial intelligence model comprises a deep learning model configured to detect hand landmarks directly from camera images.
- The system of claim 1, the operations further comprising: determining a second 3D origin point based on a stopping position of the user's hand when the user stops moving the hand; determining a second 3D end point based on a second movement of the user's hand from the second 3D origin point; deriving a second 3D vector based on the second 3D origin point and the second 3D end point; and applying a second action to the targeted virtual object based on the second 3D vector.
- The system of claim 10, wherein applying the second action comprises imparting a velocity to the targeted virtual object based on the second 3D vector, the velocity imparted to the targeted virtual object having a direction corresponding to the vector direction of the second 3D vector and a magnitude based on a length of the second 3D vector, and wherein imparting the velocity comprises moving the targeted virtual object in the direction and at the velocity in an augmented reality (AR) or virtual reality (VR) environment.
- The system of claim 1, wherein determining the position of the user's hand comprises observing the user's hand via one or more camera sensors, radar sensors, light detection and ranging (lidar) sensors, or a combination thereof included in an AR device, a VR device, or a combination thereof worn by the user.
- The system of claim 12, wherein determining the position of the user's hand comprises observing the user's hand via one or more camera sensors, radar sensors, lidar sensors, or a combination thereof positioned external to the AR device and the VR device.
- The system of claim 1, wherein applying the action to the targeted virtual object comprises executing commands for a joystick driver.
- The system of claim 14, wherein the commands for the joystick driver emulate a physical joystick.
- The system of claim 1, the operations further comprising determining that the user is targeting the virtual object based on identifying a targeting gesture performed by the user targeting the virtual object.
- The system of claim 16, wherein the targeting gesture comprises a hand gesture, a body gesture, a voice command, or a combination thereof.
- The system of claim 17, wherein the hand gesture comprises an extended palm pointed toward the targeted virtual object.
- A method comprising: determining a position of a user's hand; identifying a manipulation gesture performed by the user targeting a virtual object; determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed; determining a 3D end point based on movement of the user's hand from the origin point; deriving a 3D vector based on the 3D origin point and the 3D end point; and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
- A non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: determining a position of a user's hand; identifying a manipulation gesture performed by the user targeting a virtual object; determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed; determining a 3D end point based on movement of the user's hand from the origin point; deriving a 3D vector based on the 3D origin point and the 3D end point; and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the reach of the user's arm.
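Claims 7 through 9 describe identifying the pinch gesture with an artificial intelligence model, where a deep learning model detects hand landmarks directly from camera images. A minimal sketch of how such a pinch check might consume those landmarks, assuming a common 21-landmark hand convention and a hypothetical distance threshold (the patent specifies neither), could look like:

```python
import math

# Fingertip indices follow the widely used 21-landmark hand layout
# (an assumption; the patent does not specify a landmark scheme).
THUMB_TIP = 4
INDEX_TIP = 8

def is_pinch(landmarks, threshold=0.03):
    """Return True when the thumb and index fingertips are close
    enough to count as a pinch. `landmarks` is a list of (x, y, z)
    tuples from a hand-tracking model; `threshold` is a hypothetical
    value in the model's normalized coordinate space."""
    tx, ty, tz = landmarks[THUMB_TIP]
    ix, iy, iz = landmarks[INDEX_TIP]
    dist = math.sqrt((tx - ix) ** 2 + (ty - iy) ** 2 + (tz - iz) ** 2)
    return dist < threshold

# Example: 21 landmarks with the two fingertips nearly touching.
lm = [(0.0, 0.0, 0.0)] * 21
lm[THUMB_TIP] = (0.50, 0.50, 0.0)
lm[INDEX_TIP] = (0.51, 0.50, 0.0)
print(is_pinch(lm))  # True: fingertips are about 0.01 apart
```

In practice the claimed deep learning model would supply the landmark list each frame, and the gesture state would gate entry into and exit from the "joystick" mode.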
Description
Virtual manipulation of augmented and virtual reality objects

Claim of priority

This patent application claims the benefit of priority to U.S. Application No. 18/463,113, filed September 7, 2023, which is incorporated by reference herein in its entirety.

Augmented reality (AR) and virtual reality (VR) systems enable the display of specific rendered content, such as three-dimensional (3D) content. VR systems provide the ability to replace the surrounding environment with a virtual environment containing 3D content. AR systems provide the ability to display 3D content by overlaying it onto, for example, the user's real-time surroundings.

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. To facilitate identification of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. Some non-limiting examples are illustrated in the figures of the accompanying drawings:

- FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, according to some examples.
- FIG. 2 is a block diagram of an example virtual joystick system, according to some examples.
- FIG. 3 is a diagrammatic representation of a messaging system having both client-side and server-side functionality, according to some examples.
- FIG. 4 is a diagrammatic representation of a data structure as maintained in a database, according to some examples.
- FIG. 5 is a diagrammatic representation of a message, according to some examples.
- FIG. 6 illustrates a system including a head-wearable device, according to some examples.
- FIG. 7 illustrates a user manipulating a virtual object, according to some examples.
- FIG. 8 illustrates a vector diagram showing exemplary vector transformations, according to some examples.
- FIG. 9 illustrates a process suitable for manipulating virtual objects, according to some examples.
- FIG. 10 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed in this specification, according to some examples.
- FIG. 11 is a block diagram illustrating a software architecture within which examples may be implemented.

Virtual reality (VR) and/or augmented reality (AR) systems provide for the rendering and display of various virtual objects. VR systems, for example, replace the surrounding environment with a fully virtual environment containing virtual objects. AR systems retain parts of the surrounding environment for display and augment the environment with specific rendered objects, including virtual objects. Because traditional input devices such as mice and keyboards are not well suited to AR/VR use, these systems benefit from interaction techniques for manipulating virtual objects. One approach is to track the user's hands and map them directly to virtual hands, allowing virtual objects to be grasped and manipulated as if they were real objects. However, this technique is limited because virtual objects must remain within arm's reach.

The present disclosure describes "virtual joystick" techniques for manipulating virtual objects without these limitations. In certain examples, the virtual joystick enables users to manipulate virtual objects in AR or VR environments without being limited by the reach of their arms or requiring a physical controller. To use the virtual joystick, the user performs a targeting gesture, such as reaching toward an object, to target it. A manipulation gesture, such as a "pinch," then initiates a "joystick" mode. This manipulation gesture establishes a vector starting point in 3D space.
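The pipeline described here, pinning a 3D starting point at the manipulation gesture, tracking the hand to an end point, and mapping the resulting vector to a velocity, can be sketched as a minimal illustration. All names and the `gain` constant below are hypothetical, not the patented implementation:

```python
def derive_vector(origin, endpoint):
    """3D vector from the origin (hand position when the manipulation
    gesture is performed) to the end point (current hand position)."""
    return tuple(e - o for o, e in zip(origin, endpoint))

def velocity_from_vector(vector, gain=2.0):
    """Map the vector to a velocity with the same direction and a
    magnitude proportional to the vector's length. `gain` is a
    hypothetical tuning constant."""
    return tuple(gain * c for c in vector)

def step_object(position, velocity, dt):
    """Advance the targeted virtual object by one frame; the object
    may sit far beyond the user's arm reach."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Pinch at (0.1, 0.2, 0.3); the hand then moves to (0.4, 0.2, 0.3).
origin = (0.1, 0.2, 0.3)
endpoint = (0.4, 0.2, 0.3)
vec = derive_vector(origin, endpoint)   # roughly (0.3, 0.0, 0.0)
vel = velocity_from_vector(vec)         # same direction, scaled
obj = step_object((5.0, 0.0, -2.0), vel, dt=0.1)
print(obj)  # the distant object drifts along +x
```

Run each frame, this loop keeps applying velocity while the gesture is held, so moving the hand further from the starting point moves the object faster, matching the joystick behavior described above.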
As the user moves their hand after initiating the gesture, a vector is generated from the 3D starting point to the current hand position. This vector applies a velocity to the targeted object based on the vector's direction and magnitude: the further the user moves their hand from the starting point, the greater the velocity applied to the object. Applying velocity may include moving the virtual object. These techniques enable users to move virtual objects precisely over unlimited distances by controlling vectors through hand movements. Other uses of the vector may include zooming in or out, spatial navigation, gameplay, or any other behavior a physical joystick can perform. The user performs the same or a different gesture to stop virtual joystick control. Thus, by converting hand movements into vectors, the virtual joystick provides an intuitive way to manipulate virtual objects without physical constraints.

Networked computing environment

FIG. 1 is a block diagram illustrating an exemplary interaction system (100) for facilitating interactions (e.g., exchange of text messages, execution of audio and video calls, o