US-12619335-B2 - Interaction strength using virtual objects for machine control

Abstract

The technology disclosed relates to using virtual attraction between a hand or other control object in a three-dimensional (3D) sensory space and a virtual object in a virtual space. In particular, it relates to defining a virtual attraction zone of a hand or other control object that is tracked in a 3D sensory space and generating one or more interaction forces between the control object and a virtual object in a virtual space. These forces cause motion of the virtual object responsive to the proximity of the control object to the virtual object and escalate with a virtual pinch or grasp action of the control object directed to a manipulation point of the virtual object.
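To make the abstract concrete, the following is a minimal Python sketch of a proximity-based interaction force that escalates with pinch strength. The spring-like force model, the zone radius, the gain, and all names here are illustrative assumptions, not taken from the patent.

```python
import math

# Illustrative constants; the patent does not specify a particular force model.
ATTRACTION_RADIUS = 0.25  # metres: extent of the virtual attraction zone
BASE_STIFFNESS = 4.0      # spring-like gain for the attraction force

def interaction_force(control_pos, virtual_pos, pinch_strength):
    """Return a 3D force pulling the virtual object toward the control object.

    The force is active only inside the attraction zone and escalates with
    pinch_strength (0.0 = open hand, 1.0 = full pinch or grasp).
    """
    delta = [c - v for c, v in zip(control_pos, virtual_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist >= ATTRACTION_RADIUS or dist == 0.0:
        return (0.0, 0.0, 0.0)  # outside the zone (or coincident): no force
    proximity = 1.0 - dist / ATTRACTION_RADIUS  # 1 near contact, 0 at the edge
    magnitude = BASE_STIFFNESS * proximity * (1.0 + pinch_strength)
    return tuple(magnitude * d / dist for d in delta)

# Example: a half-closed pinch 10 cm from the virtual object.
print(interaction_force((0.0, 0.0, 0.3), (0.0, 0.0, 0.2), pinch_strength=0.5))
```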

Inventors

  • David S. Holz
  • Raffi Bedikian
  • Adrian Gasinski
  • Hua Yang
  • Maxwell Sills
  • Gabriel Hare

Assignees

  • SIM IP HXR LLC

Dates

Publication Date
2026-05-05
Application Date
2019-09-11

Claims (20)

  1. A method including: defining a manipulation point of an object based, at least in part, on an interaction between portions of the object, wherein a movement of the manipulation point follows a movement of the object, and wherein the manipulation point remains within a proximity of the object as the object moves through a three-dimensional (3D) space; selecting a virtual object based, at least in part, on the manipulation point of the object coming within a range of at least one of the virtual object or a manipulation point of the virtual object, wherein the object is distanced apart from the virtual object; determining an interaction force based, at least in part, on the object and the virtual object; and moving the virtual object based, at least in part, on at least one of: a movement of the manipulation point of the object; or a change detected in the interaction force.
  2. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a position of a calculation point of at least one portion of the object; selecting at least one manipulation point proximate to the virtual object based, at least in part, on the tracked motion and the position of the calculation point; and manipulating the virtual object based, at least in part, on an interaction between the calculation point and the selected at least one manipulation point.
  3. The method of claim 1, including: determining a predictive model of the object; and tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model is determined based, at least in part, on a feature of the object.
  4. The method of claim 1, including: determining a predictive model of the object; and tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model is determined based, at least in part, on a brightness of the object.
  5. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model; and applying a constraint factor to the predictive model to eliminate impossible poses of the object based, at least in part, on a physical property of the object.
  6. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a position of a calculation point of at least one portion of the object; determining that the interaction is an outside pinch pose based, at least in part, on a decrease in distance between opposable calculation points of portions of the object; assigning a strength to the outside pinch pose based, at least in part, on a convergence of the calculation points; and manipulating the virtual object based, at least in part, on the strength.
  7. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a position of a calculation point of at least one portion of the object; determining that the interaction is an inside pinch pose based, at least in part, on a change in distance between opposable calculation points of portions of the object; assigning an attraction strength to the inside pinch pose based, at least in part, on a degree of convergence of the calculation points; and manipulating the virtual object based, at least in part, on the attraction strength assigned to the inside pinch pose.
  8. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a position of a calculation point of at least one portion of the object; determining that the interaction is a grab pose based, at least in part, on a convergence of calculation points of portions of the object; assigning a strength to the grab pose based, at least in part, on the convergence of the calculation points; and manipulating the virtual object based, at least in part, on the strength.
  9. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model; and generating data representing a position of the virtual object relative to the predictive model of the object.
  10. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model; and generating data representing positions in a space of the virtual object and the predictive model of the object.
  11. The method of claim 1, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a position of a calculation point of at least one portion of the object; determining a pose based, at least in part, on a convergence of calculation points of one or more portions of the object; assigning a strength to the pose based, at least in part, on the convergence; identifying the pose as a dominant pose based, at least in part, on at least one of the strength or a position of the convergence; and manipulating the virtual object based, at least in part, on the dominant pose.
  12. The method of claim 1, including creating an anchor point at a location on the object based, at least in part, on the interaction between two portions of the object, wherein the manipulation point remains within a predetermined distance from the anchor point at the location on the object as the object moves through the 3D space.
  13. The method of claim 1, wherein the interaction force is based, at least in part, on a virtual mass of the virtual object.
  14. The method of claim 1, wherein the virtual object is separated from points on the object.
  15. A method including: defining a force applied by an object that is tracked in a three-dimensional (3D) space; defining a manipulation point of the object based, at least in part, on an interaction between portions of the object, wherein a movement of the manipulation point follows a movement of the object, and wherein the manipulation point remains within a proximity of the object as the object moves through the 3D space; selecting a virtual object based, at least in part, on the manipulation point of the object coming within a range of at least one of the virtual object or a manipulation point of the virtual object, wherein the object is distanced apart from the virtual object; determining an interaction force with respect to the object and the virtual object based, at least in part, on the defined force; and moving the virtual object based, at least in part, on at least one of: a movement of the manipulation point of the object; or a change detected in the interaction force.
  16. The method of claim 15, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a position of a calculation point of at least one portion of the object; selecting at least one manipulation point proximate to the virtual object based, at least in part, on the tracked motion and the position of the calculation point; and manipulating the virtual object based, at least in part, on an interaction between the calculation point and the selected at least one manipulation point.
  17. The method of claim 15, including: determining a predictive model of the object; and tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model is determined based, at least in part, on a feature of the object.
  18. The method of claim 15, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model; and applying a constraint factor to the predictive model to eliminate impossible poses of the object based, at least in part, on a physical property of the object.
  19. The method of claim 15, including: determining a predictive model of the object; tracking motion of the object based, at least in part, on the predictive model, wherein the predictive model includes a calculation point of at least one portion of the object; determining that the interaction is a grab pose based, at least in part, on a convergence of calculation points of portions of the object; assigning a strength to the grab pose based, at least in part, on the convergence of the calculation points; and manipulating the virtual object based, at least in part, on the strength.
  20. A method including: defining a manipulation point of an object based, at least in part, on an interaction between portions of the object, wherein a movement of the manipulation point follows a movement of the object, and wherein the manipulation point remains within a proximity of the object as the object moves through a three-dimensional (3D) space; selecting a virtual object based, at least in part, on the manipulation point of the object coming within a range of at least one of the virtual object or a manipulation point of the virtual object, wherein the object is distanced apart from the virtual object; determining a repulsion force with respect to the object and the virtual object; and moving the virtual object based, at least in part, on at least one of: an interaction with the manipulation point of the object; or a change detected in the repulsion force.
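The independent claims (1, 15, and 20) share a common pipeline: define a manipulation point from an interaction between portions of a tracked object, select a virtual object whose manipulation point comes within range, then move it by a force. The following is a minimal Python sketch of that pipeline under stated assumptions: the pinch midpoint as manipulation point, the selection range, and the dictionary layout of virtual objects are all illustrative choices, not taken from the claims.

```python
import math

GRAB_RANGE = 0.10  # metres; illustrative selection range, not from the claims

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manipulation_point(thumb_tip, index_tip):
    """A manipulation point defined by an interaction between portions of the
    tracked object (here, the midpoint of a thumb-index pinch). It follows the
    hand and by construction stays within the hand's proximity."""
    return tuple((t + i) / 2.0 for t, i in zip(thumb_tip, index_tip))

def select_virtual_object(manip_point, virtual_objects):
    """Select the nearest virtual object whose own manipulation point comes
    within range of the hand's manipulation point; None if none qualifies."""
    in_range = [v for v in virtual_objects
                if dist(manip_point, v["manip_point"]) < GRAB_RANGE]
    return min(in_range,
               key=lambda v: dist(manip_point, v["manip_point"]),
               default=None)

def move(selected, force, dt=1.0 / 90.0):
    """Integrate the interaction force into motion of the virtual object;
    claim 13 notes the force can depend on a virtual mass."""
    accel = [f / selected["mass"] for f in force]
    selected["velocity"] = [v + a * dt
                            for v, a in zip(selected["velocity"], accel)]
    selected["position"] = [p + v * dt
                            for p, v in zip(selected["position"], selected["velocity"])]

# Example: a hand pinching 5 cm above a unit-mass cube.
cube = {"manip_point": (0.0, 0.0, 0.0), "position": [0.0, 0.0, 0.0],
        "velocity": [0.0, 0.0, 0.0], "mass": 1.0}
mp = manipulation_point((0.02, 0.0, 0.05), (-0.02, 0.0, 0.05))
target = select_virtual_object(mp, [cube])
if target is not None:
    move(target, force=(0.0, 0.0, -1.0))
```

Using the pinch midpoint keeps the manipulation point within a proximity of the object as it moves, which is one simple way to satisfy the claim's "follows a movement of the object" condition.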

Description

PRIORITY DATA

This application is a continuation of U.S. application Ser. No. 14/541,078, entitled "INTERACTION STRENGTH USING VIRTUAL OBJECTS FOR MACHINE CONTROL", filed on 13 Nov. 2014 and issued as U.S. Pat. No. 10,416,834 on 17 Sep. 2019, which claims the benefit of U.S. Provisional Patent Application No. 61/905,103, entitled "INTERACTION STRENGTH USING VIRTUAL OBJECTS FOR MACHINE CONTROL," filed on 15 Nov. 2013. These applications are hereby incorporated by reference for all purposes.

INCORPORATIONS

Materials incorporated by reference in this filing include the following:

  • "PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION," U.S. Prov. App. No. 61/871,790, filed 29 Aug. 2013
  • "PREDICTIVE INFORMATION FOR FREE-SPACE GESTURE CONTROL AND COMMUNICATION," U.S. Prov. App. No. 61/873,758, filed 4 Sep. 2013
  • "VELOCITY FIELD INTERACTION FOR FREE SPACE GESTURE INTERFACE AND CONTROL," U.S. Prov. App. No. 61/891,880, filed 16 Oct. 2013
  • "VELOCITY FIELD INTERACTION FOR FREE SPACE GESTURE INTERFACE AND CONTROL," U.S. Nonprovisional application Ser. No. 14/516,493, filed 16 Oct. 2014
  • "CONTACTLESS CURSOR CONTROL USING FREE-SPACE MOTION DETECTION," U.S. Prov. App. No. 61/825,480, filed 20 May 2013
  • "FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS," U.S. Prov. App. No. 61/873,351, filed 3 Sep. 2013
  • "FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS," U.S. Prov. App. No. 61/877,641, filed 13 Sep. 2013
  • "CONTACTLESS CURSOR CONTROL USING FREE-SPACE MOTION DETECTION," U.S. Prov. App. No. 61/825,515, filed 20 May 2013
  • "FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS," U.S. Nonprovisional application Ser. No. 14/154,730, filed 20 Feb. 2014
  • "SYSTEMS AND METHODS FOR MACHINE CONTROL," U.S. Nonprovisional application Ser. No. 14/280,018, filed 16 May 2014
  • "DYNAMIC, FREE-SPACE USER INTERACTIONS FOR MACHINE CONTROL," U.S. Nonprovisional application Ser. No. 14/155,722, filed 1 Jan. 2014
  • "PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION," U.S. Nonprovisional application Ser. No. 14/474,077, filed 29 Aug. 2014
  • "SYSTEMS AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE," U.S. Prov. App. No. 61/724,091, filed 8 Nov. 2012
  • "VEHICLE MOTION SENSORY CONTROL," U.S. Prov. App. No. 62/005,981, filed 30 May 2014
  • "MOTION CAPTURE USING CROSS-SECTIONS OF AN OBJECT," U.S. application Ser. No. 13/414,485, filed 7 Mar. 2012
  • "SYSTEM AND METHODS FOR CAPTURING MOTION IN THREE-DIMENSIONAL SPACE," U.S. application Ser. No. 13/742,953, filed 16 Jan. 2013

TECHNICAL FIELD

Embodiments relate generally to machine user interfaces, and more specifically to the use of virtual objects as user input to machines.

DISCUSSION

Conventional machine interfaces are in common daily use. Every day, millions of users type their commands, click their computer mouse and hope for the best. These types of interfaces, however, are very limited. What is needed is a remedy for this and other shortcomings of traditional machine interface approaches.

SUMMARY

Aspects of the systems and methods described herein provide for improved control of machines or other computing resources based, at least in part, upon determining whether positions and/or motions of an object (e.g., a hand, a tool, hand and tool combinations, or other detectable objects or combinations thereof) might be interpreted as an interaction with one or more virtual objects.
Embodiments can enable modeling of physical objects, created objects and interactions with various combinations thereof for machine control or other purposes.

The technology disclosed relates to using virtual attraction between a hand or other control object in a three-dimensional (3D) sensory space and a virtual object in a virtual space. In particular, it relates to defining a virtual attraction zone of a hand or other control object that is tracked in a 3D sensory space and generating one or more interaction forces between the control object and a virtual object in a virtual space. These forces cause motion of the virtual object responsive to the proximity of the control object to the virtual object and escalate with a virtual pinch or grasp action of the control object directed to a manipulation point of the virtual object.

In some embodiments, the technology disclosed further relates to generating a predictive model of the hand and using the predictive model to track motion of the hand. The predictive model includes positions of calculation points of the fingers, thumb and palm of the hand. It also relates to dynamically selecting at least one manipulation point proximate to the virtual object based on the motion tracked by the predictive model and the positions of one or more of the calculation points, and manipulating the virtual object by interaction between at least some of the calculation points of the predictive model and the dynamically selected manipulation point. In one embodiment, the predictive model is generated based, at least in part, on a feature of the hand.
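The summary's notions of calculation points, pose strength, and a dominant pose (see claims 6 through 8 and 11) can be sketched in Python as follows. This is a minimal sketch under stated assumptions: pinch strength is taken as the normalized convergence of thumb and fingertip calculation points, a grab is modeled as the joint convergence of all fingertips on the thumb, and all distance thresholds are illustrative, not the patent's definitions.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pinch_strength(thumb_tip, finger_tip, engage=0.08, full=0.015):
    """Strength of a pinch pose as the degree of convergence of two opposable
    calculation points. The engage/full distances (metres) are assumptions."""
    d = dist(thumb_tip, finger_tip)
    if d >= engage:
        return 0.0
    if d <= full:
        return 1.0
    return (engage - d) / (engage - full)

def dominant_pose(calc_points):
    """Pick the dominant pose by strength from candidate pinches and a grab.

    `calc_points` maps part names ('thumb', 'index', ...) to 3D tip positions;
    modeling a grab as every fingertip converging on the thumb is one
    plausible reading, not the patent's definition."""
    thumb = calc_points["thumb"]
    fingers = {n: p for n, p in calc_points.items() if n != "thumb"}
    poses = {f"pinch-{n}": pinch_strength(thumb, p) for n, p in fingers.items()}
    poses["grab"] = min(pinch_strength(thumb, p) for p in fingers.values())
    return max(poses.items(), key=lambda kv: kv[1])

# Example: index pinch fully closed, other fingers open.
hand = {"thumb": (0.0, 0.0, 0.0), "index": (0.01, 0.0, 0.0),
        "middle": (0.06, 0.02, 0.0), "ring": (0.07, 0.03, 0.0)}
print(dominant_pose(hand))  # -> ('pinch-index', 1.0)
```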