EP-4738252-A1 - OPTICAL FLOW TRANSLATION ESTIMATION FOR INSIDE-OUT LOCATION TRACKING AND MAPPING SYSTEM

EP 4738252 A1

Abstract

Techniques for optical flow translation estimation by an inside-out location tracking system may include generating an updated distance parameter by optimizing a distance parameter using a translation vector and a set of other fixed terms of a homography for a set of matched image point pairs in an image; determining whether to keep or to discard the updated distance parameter; and optimizing the translation vector using a current distance parameter (either the updated distance parameter or a prior distance parameter that was retained), thereby generating an updated translation vector. An optical flow translation method also may include evaluating for convergence and iteratively optimizing the distance parameter and the translation vector until there is convergence. Once there is convergence, a current updated translation may be output. Convergence may depend on one or more predetermined thresholds relating to a size of parameter updates, an error reduction, and/or a number of iterations.
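The alternating loop described in the abstract can be illustrated with a minimal sketch. All names here are hypothetical, and the homography is simplified to a toy flow model in which a fronto-parallel plane at distance d induces a constant image flow t / d; in this simplification t and d are only identifiable up to scale (their ratio t / d), mirroring the usual monocular scale ambiguity. The sketch shows the claimed control flow: optimize the distance with the translation fixed, keep the update only if it reduces error, optimize the translation with the distance fixed, and stop on a small step, a small error reduction, or an iteration cap.

```python
import numpy as np

def alternate_optimize(src, dst, t0, d0, max_iters=50, step_tol=1e-8, err_tol=1e-12):
    """Hypothetical sketch: alternate between refining a scalar distance d
    and a 2D translation t under the toy flow model dst = src + t / d."""
    t, d = np.asarray(t0, float), float(d0)

    def err(t, d):  # mean squared residual flow over all matched point pairs
        return np.mean(np.sum((src + t / d - dst) ** 2, axis=1))

    prev = err(t, d)
    for _ in range(max_iters):
        # 1. Candidate distance from the ratio of modeled to observed flow norms.
        modeled = np.linalg.norm(t / d)
        observed = np.mean(np.linalg.norm(dst - src, axis=1))
        d_new = d * modeled / observed if observed > 0 else d
        # 2. Keep the candidate only if it reduces the error; otherwise discard it.
        if err(t, d_new) < prev:
            d = d_new
        # 3. Optimize translation with the (possibly updated) distance held fixed;
        #    in this toy model the least-squares update has a closed form.
        t_new = d * np.mean(dst - src, axis=0)
        e = err(t_new, d)
        step = np.linalg.norm(t_new - t)
        t = t_new
        # 4. Converge on a small parameter update, a small error reduction,
        #    or (via the loop bound) a maximum number of iterations.
        if step < step_tol or prev - e < err_tol:
            break
        prev = e
    return t, d

# Synthetic matched point pairs generated by t_true at distance d_true.
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, size=(100, 2))
t_true, d_true = np.array([0.2, -0.1]), 2.0
dst = src + t_true / d_true
t_est, d_est = alternate_optimize(src, dst, t0=np.array([0.05, 0.05]), d0=1.0)
# t_est / d_est recovers t_true / d_true; t and d individually are scale-ambiguous.
```

The accept/reject gate in step 2 corresponds to the claimed determination of whether to keep or discard the updated distance parameter; the real method optimizes against a full homography rather than this constant-flow stand-in.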

Inventors

  • LONG II, JOHN DAVIS

Assignees

  • Qwake Technologies, Inc.

Dates

Publication Date
2026-05-06
Application Date
2025-10-07

Claims (19)

  1. A method for optical flow translation estimation by an inside-out location tracking system, the method comprising: optimizing a prior distance parameter using a translation vector and a set of other fixed terms of a homography for a set of matched image point pairs in an image, thereby generating an updated distance parameter; determining a provided distance parameter comprising either the updated distance parameter or the prior distance parameter based on a determination of whether to keep or to discard the updated distance parameter, respectively; optimizing the translation vector using the provided distance parameter and the set of other fixed terms of the homography, thereby generating an updated translation vector; evaluating for convergence; if there is no convergence, iteratively optimizing the provided distance parameter using the updated translation vector and then optimizing the translation vector; if there is convergence, terminating the iterative algorithm; and outputting a current updated translation comprising the updated translation vector from a most recent iteration of the iterative algorithm.
  2. The method of claim 1, wherein optimizing the prior distance parameter comprises: generating a homogeneous mapping of an observed point in a source image into a target image by a rotation-only homography; generating a projected point in the target image by 2D projection; estimating a first norm of a rotation-only optical flow between the observed point and the projected point; applying a homography reflecting the prior distance parameter to the homogeneous mapping of the observed point; generating a modeled point in the target image by 2D projection; estimating a second norm of a modeled conditional optical flow between the observed point and the modeled point; and updating the current distance parameter based on a ratio of the first norm and the second norm, thereby generating the updated distance parameter.
  3. The method of claim 1, further comprising generating an updated point using the updated distance parameter.
  4. The method of claim 3, wherein the determination of whether to keep or to discard the updated distance parameter comprises: comparing a first error between the updated point and an observed point and a second error between a modeled point and the observed point; and keeping the updated distance parameter if the first error is less than the second error.
  5. The method of claim 1, wherein convergence comprises one, or a combination, of (a) a size of a plurality of parameter updates falls below a predetermined size threshold value, (b) an error reduction falls below a predetermined error reduction threshold, and (c) a number of iterations exceeds a maximum iterations threshold.
  6. The method of claim 1, wherein optimizing the translation vector comprises: determining a transformation between 2D homogeneous coordinates as a function of a homogeneous 2D point in a camera frame; determining a partial derivative with respect to a translation vector and a 2D projected point; defining a regularized linear system, for which a solution may be determined using a robust linear system solver; and generating an updated translation vector.
  7. The method of claim 6, wherein the robust linear system solver comprises a Levenberg-Marquardt algorithm.
  8. The method of claim 1, further comprising providing the updated translation vector to a downstream mapping module in an autonomous navigation system.
  9. The method of claim 1, further comprising providing the updated translation vector to a downstream mapping module in a medical imaging system.
  10. The method of claim 1, further comprising providing the updated translation vector to a downstream mapping module in a robotics system.
  11. The method of claim 1, wherein the data associated with the translation vector and the distance parameter is stored using an associative data structure.
  12. A system for optical flow translation estimation for inside-out location tracking, the system comprising: a memory comprising non-transitory computer-readable storage medium configured to store instructions and data, the data being stored in an associative data structure; and a processor communicatively coupled to the memory, the processor configured to execute instructions stored on the non-transitory computer-readable storage medium to: optimize a prior distance parameter using a translation vector and a set of other fixed terms of a homography for a set of matched image point pairs in an image, thereby generating an updated distance parameter; determine a provided distance parameter comprising either the updated distance parameter or the prior distance parameter based on a determination of whether to keep or to discard the updated distance parameter, respectively; optimize the translation vector using the provided distance parameter and the set of other fixed terms of the homography, thereby generating an updated translation vector; evaluate for convergence; if there is no convergence, iteratively optimize the provided distance parameter using the updated translation vector and then optimize the translation vector; if there is convergence, terminate the iterative algorithm; and output a current updated translation comprising the updated translation vector from a most recent iteration of the iterative algorithm.
  13. The system of claim 12, wherein the associative data structure comprises a tracking grid configured to update information about camera and scene points.
  14. The system of claim 12, wherein the associative data structure comprises a tracking grid configured to eliminate and insert new cameras and scene points.
  15. The system of claim 12, wherein the associative data structure comprises a tracking grid configured to evaluate a quality of a tracked scene point.
  16. The system of claim 12, wherein the data comprises translation data associated with an image.
  17. The system of claim 12, wherein the data comprises distance data associated with an image.
  18. The system of claim 12, wherein the data is associated with a homography.
  19. The system of claim 12, wherein the data is associated with predetermined thresholds.
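The distance update of claim 2 can be sketched concretely. This is a hypothetical simplification: the homography is taken to be the plane-induced form H = R + t nᵀ / d with an assumed fronto-parallel plane n = (0, 0, 1), and the two flow norms are measured relative to the rotation-only projection (a simplifying choice of reference points, since translation-induced flow scales as 1/d).

```python
import numpy as np

def update_distance(x_src, x_obs, R, t, d):
    """One distance-parameter update (hypothetical sketch of claim 2),
    using the plane-induced homography H = R + t n^T / d with n = (0, 0, 1)."""
    def project(p):                        # 2D projection of a homogeneous point
        return p[:2] / p[2]
    xh = np.append(x_src, 1.0)             # homogeneous mapping of the source point
    p_rot = project(R @ xh)                # rotation-only mapping, projected to 2D
    n = np.array([0.0, 0.0, 1.0])
    p_mod = project((R + np.outer(t, n) / d) @ xh)  # homography with current d
    flow_obs = np.linalg.norm(x_obs - p_rot)        # observed translation-induced flow
    flow_mod = np.linalg.norm(p_mod - p_rot)        # modeled translation-induced flow
    # Modeled translation flow scales as 1/d, so rescaling d by the ratio of
    # the two norms matches the modeled flow magnitude to the observed one.
    return d * flow_mod / flow_obs if flow_obs > 0 else d

# Toy check: identity rotation, scene plane at true distance 2.0.
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
x_src = np.array([0.3, 0.2])
H_true = R + np.outer(t, [0.0, 0.0, 1.0]) / 2.0
xh = H_true @ np.append(x_src, 1.0)
x_obs = xh[:2] / xh[2]                     # matched observation in the target image
d_new = update_distance(x_src, x_obs, R, t, d=1.0)  # recovers d = 2.0 here
```

In a full implementation the accepted update would then feed the translation refinement of claim 6, e.g. a regularized linear system solved with a robust solver such as Levenberg-Marquardt (claim 7).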

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. Patent Application No. 17/685,590, entitled "Estimating Camera Motion Through Visual Tracking In Low Contrast High Motion Single Camera Systems," filed March 3, 2022, which claims the benefit of U.S. Provisional Application No. 63/156,246, filed on March 3, 2021, all of which are hereby incorporated by reference in their entirety.

BACKGROUND OF INVENTION

In high stress and oftentimes hazardous environments (firefighting, accident scenes, search and rescue, disaster relief, oil and gas, fighter piloting, mining, police or military operations, special operations, and the like), workers and other personnel often need to navigate as a team in an environment where it is very difficult, if not impossible, for team members to locate each other through visual or verbal means. Often team members are too dispersed, whether due to hazards, obstacles, or the size of the operating location, to maintain visual or verbal contact. Even where radio contact is available, in many hazardous environments (e.g., fire, military engagement, disaster environments) it may not be possible for a team member to accurately describe their location, particularly relative to others, to aid in navigating quickly and efficiently to a desired location. Also, the operating locations may be remote, where conventional location tracking technologies (e.g., GPS and cellular) are unreliable (i.e., intermittent or of insufficient resolution). Other persons (e.g., joggers, hikers, adventurers) also trek into remote areas and often get lost in locations where conventional location tracking technology is unreliable. While conventional GPS and cellular triangulation methods work well enough within urban environments, they often perform poorly in remote locations or in a disaster situation.
Many conventional team location tracking and mapping solutions require outside-in location tracking infrastructure, relying on external location services such as GPS. Outside-in location tracking systems require infrastructure (e.g., GPS satellites, warehouse cameras, emitters, etc.) that is often lacking in these environments. Sparse feature tracking requires high-quality images: known camera-based inside-out team location tracking systems assume high-quality visible light images (i.e., for extracting sparse features, which are matched across time in order to estimate camera motion and scene structure). Since the hazardous or disaster environments in which emergency responders and critical workers often need to operate typically do not have access to external location services and cannot accommodate the capture of high-quality visible light images in real time, these conventional solutions are of limited use to them. Thus, there is a need for an improved inside-out location tracking and mapping system.

BRIEF SUMMARY

The present disclosure provides techniques for optical flow translation estimation by an inside-out location tracking and mapping system.
A method for optical flow translation estimation by an inside-out location tracking and mapping system may include: optimizing a prior distance parameter using a translation vector and a set of other fixed terms of a homography for a set of matched image point pairs in an image, thereby generating an updated distance parameter; determining a provided distance parameter comprising either the updated distance parameter or the prior distance parameter based on a determination of whether to keep or to discard the updated distance parameter, respectively; optimizing the translation vector using the provided distance parameter and the set of other fixed terms of the homography, thereby generating an updated translation vector; evaluating for convergence; if there is no convergence, iteratively optimizing the provided distance parameter using the updated translation vector and then optimizing the translation vector; if there is convergence, terminating the iterative algorithm; and outputting a current updated translation comprising the updated translation vector from a most recent iteration of the iterative algorithm. In some examples, optimizing the prior distance parameter may include: generating a homogeneous mapping of an observed point in a source image into a target image by a rotation-only homography; generating a projected point in the target image by 2D projection; estimating a first norm of a rotation-only optical flow between the observed point and the projected point; applying a homography reflecting the prior distance parameter to the homogeneous mapping of the observed point; generating a modeled point in the target image by 2D projection; estimating a second norm of a modeled conditional optical flow between the observed point and the modeled point; and updating the current distance parameter based on a ratio of the first norm and the second norm, thereby generating the updated distance parameter. In some examples, the method may also