
EP-3948657-B1 - TRACKING AGGREGATION AND ALIGNMENT

EP 3948657 B1

Inventors

  • YEH, Wei, Cheng
  • COSSAIRT, Travis, Jon
  • RODGERS, Rachel

Dates

Publication Date
2026-05-06
Application Date
2020-03-30

Claims (15)

  1. A tangible, non-transitory, machine-readable medium, comprising machine-readable instructions that, when executed by one or more processors of a machine, cause the machine to: receive a tracked target context for a first tracked object in an open environment (200) from a first tracking sensor system (204) having a first coverage zone (202A); provide the tracked target context from the first tracking sensor system (204) to a second tracking sensor system (210) in the open environment (200) different than the first tracking sensor system (204) and having a second coverage zone (202B); cause identification of a newly observed tracked target by the second tracking sensor system (210) using the tracked target context from the first tracking sensor system (204), by: filtering-out a subset of a set of candidate identities that the newly observed tracked target may be identified as based upon the context from the first tracking sensor system (204) by determining the subset as a portion of the set of candidate identities tracked at a previous location outside a range of locations identified based upon a time difference between a time the set of candidate identities were tracked at the previous location and a time the newly observed tracked target was observed by the second tracking sensor system (210); providing the set of candidate identities without the subset to the second tracking sensor system (210); determining a prediction confidence score indicative of a confidence level of the identification of the newly observed tracked target; and gathering additional tracking sensor system inputs for another identification of the newly observed tracked target in response to the prediction confidence score failing to meet a confidence threshold.
  2. The machine-readable medium of claim 1, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to: generate a blacklist based upon the subset of the set of candidate identities; and provide the blacklist to the second tracking sensor system.
  3. The machine-readable medium of claim 1, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to: generate a blacklist based upon the subset of the set of candidate identities; and provide the set of candidate identities without the blacklist to the second tracking sensor system.
  4. The machine-readable medium of claim 1, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to gather the additional tracking sensor system inputs by providing a direction to the newly observed tracked target to proceed to a tracking sensor system coverage area.
  5. The machine-readable medium of claim 1, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to: receive training data indicative of a group of persons and associated attributes; identify patterns in the associated attributes to identify grouping attributes indicating that persons should be grouped; determine if the first tracked object is associated with the patterns; and in response to determining that the first tracked object is associated with the patterns, identify the first tracked object as part of an active group.
  6. The machine-readable medium of claim 5, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to: determine if a second tracked object is a member of the active group, based upon the patterns, by: determining an amount of time the first tracked object and the second tracked object have spent in a threshold proximity to each other; and when the amount of time exceeds a threshold, associating the second tracked object with the active group.
  7. The machine-readable medium of claim 5, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to: determine if a second tracked object is a member of the active group, based upon the patterns, by: associating a first weighted probability to the second tracked object, wherein the first weighted probability represents a likelihood that the second tracked object is a member of the active group; associating a second weighted probability to the second tracked object, wherein the second weighted probability represents a second likelihood that the second tracked object is a member of the active group; and when an addition of the first weighted probability and the second weighted probability exceeds a threshold value, associating the second tracked object with the active group.
  8. The machine-readable medium of claim 1, comprising machine-readable instructions that, when executed by the one or more processors of the machine, cause the machine to: receive a control action associated with an interactive open environment feature based on: identification of a second tracked object by the second sensor tracking system based on the tracked target context from the first tracking sensor system; and the second tracked object nearing the interactive open environment feature; and cause implementation of the control action.
  9. The machine-readable medium of claim 8, wherein the interactive open environment feature comprises a display, and wherein the control action causes information associated with the second tracked object to be presented via the display based on the identification of the second tracked object and the second tracked object nearing the display.
  10. The machine-readable medium of claim 8, wherein the second tracked object is associated with a wearable device, and the one or more processors are configured to: update information associated with the second tracked object based on an interaction with the interactive open environment feature by the second tracked object; and instruct the wearable device to display, via the wearable device, an indication that the information associated with the second tracked object has been updated.
  11. The machine-readable medium of claim 10, wherein the information is associated with a virtual game, and wherein the interaction with the interactive open environment feature causes points to be added to a virtual game status associated with the second tracked object.
  12. A computer-implemented method, comprising: receiving tracking sensor system inputs comprising a tracked target context for a first tracked object in an open environment (200) from a first tracking sensor system (204) having a first coverage zone (202A); providing the tracked target context from the first tracking sensor system (204) to a second tracking sensor system (210) different than the first tracking sensor system (204) and having a second coverage zone (202B); causing identification of a newly observed tracked target in the open environment (200) by the second tracking sensor system (210) using the tracked target context from the first tracking sensor system (204) by: filtering-out a subset of a set of candidate identities that the newly observed tracked target may be identified as based upon the context from the first tracking sensor system (204) by determining the subset as a portion of the set of candidate identities tracked at a previous location outside a range of locations identified based upon a time difference between a time the set of candidate identities were tracked at the previous location and a time the newly observed tracked target was observed by the second tracking sensor system; providing the set of candidate identities without the subset to the second tracking sensor system (210); determining a prediction confidence score indicative of a confidence level of the identification of the newly observed tracked target; and gathering additional tracking sensor system inputs for another identification of the newly observed tracked target in response to the prediction confidence score failing to meet a confidence threshold.
  13. The computer-implemented method of claim 12, wherein the first tracked object and the newly observed tracked target comprise a first group of individuals.
  14. A system, comprising: a first tracking sensor system (204), configured to track a first tracked object in a first coverage zone of an open environment (200); a second tracking sensor system (210), configured to track a second tracked object in a second coverage zone of an open environment (200); and a contextual tracking system, configured to: receive a tracked target context for the first tracked object from the first tracking sensor system (204); provide the tracked target context from the first tracking sensor system (204) to the second tracking sensor system (210) different than the first tracking sensor system (204); cause identification of the second tracked object using the tracked target context from the first tracking sensor system (204), wherein the second tracking sensor system (210) is different than the first tracking sensor system (204), by: filtering-out a subset of a set of candidate identities that the newly observed tracked target may be identified as based upon the context from the first tracking sensor system (204) by determining the subset as a portion of the set of candidate identities tracked at a previous location outside a range of locations identified based upon a time difference between a time the set of candidate identities were tracked at the previous location and a time the newly observed tracked target was observed by the second tracking sensor system; providing the set of candidate identities without the subset to the second tracking sensor system (210); determining a prediction confidence score indicative of a confidence level of the identification of the newly observed tracked target; and gathering additional tracking sensor system inputs for another identification of the newly observed tracked target in response to the prediction confidence score failing to meet a confidence threshold.
  15. The system of claim 14, wherein the first tracking sensor system (204), the second tracking sensor system (210) or both, comprise: a light detection and ranging (LIDAR) system, a radio frequency identification (RFID) system, a computer vision system, a Time of Flight (ToF) system, a Millimeter Wave (mmWave) system, or any combination thereof.
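The identity-filtering step recited in claims 1, 12, and 14 (removing candidate identities whose previously tracked location lies outside the range reachable in the elapsed time) can be sketched as follows. This is an illustrative reading only, not the patent's implementation; the names (`Candidate`, `feasible_candidates`) and the walking-speed bound `MAX_SPEED_M_S` are assumptions introduced for the example.

```python
# Illustrative sketch of the candidate-identity filtering in claim 1.
# All names and the speed bound are hypothetical, not from the patent.
from dataclasses import dataclass

@dataclass
class Candidate:
    identity: str
    last_x: float       # last tracked position (metres)
    last_y: float
    last_seen_s: float  # timestamp when last tracked (seconds)

MAX_SPEED_M_S = 2.0  # assumed upper bound on walking speed

def feasible_candidates(candidates, obs_x, obs_y, obs_time_s):
    """Keep only candidates whose previous location lies within the
    range reachable in the time elapsed before the new observation;
    the rest form the filtered-out 'subset' of claim 1."""
    kept = []
    for c in candidates:
        dt = obs_time_s - c.last_seen_s
        if dt < 0:
            continue  # tracked *after* the new observation; infeasible
        reach = MAX_SPEED_M_S * dt
        dist = ((obs_x - c.last_x) ** 2 + (obs_y - c.last_y) ** 2) ** 0.5
        if dist <= reach:
            kept.append(c)
    return kept
```

In this sketch the surviving list would be what is "provided to the second tracking sensor system"; a separate confidence check on the eventual identification would then decide whether additional sensor inputs are gathered.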

Description

BACKGROUND

The present disclosure relates generally to tracking systems. More specifically, certain embodiments of the present disclosure relate to aggregation and handoff of tracking system data between tracking systems to facilitate more efficient and effective tracking of objects within an environment.

In the Digital Age, with the increase of digital sensors, object tracking has become increasingly desirable. Unfortunately, in large, open environments, user tracking is a very challenging prospect, especially when accurate location and activity tracking is desired. As used herein, open environments refer to areas that allow tracked objects to move in a multitude of directions with relatively little confinement. For example, such environments might include an amusement park, an airport, a shopping mall, or other relatively large-scale environments that may have multiple tracking coverage zones. Accurate tracking of unique individuals is challenging, especially in open environments and in situations where crowd density presents issues of obstruction, where one individual might block another.

US2019/043281A1 discloses a system for providing gamification of a destination, such as a theme park, where guests use a venue app with an optional electronic ticket that allows for both self-serve access into the destination and tracking of individual guests throughout various access points. An extended-range ID such as an RFID is provided to the guest either in the electronic ticket or in a wearable. Using combinations of RFID readers, pressure sensors, and cameras, the system tracks guests down to the ride seat and tracks the movements of guided, free-floating, and free-ranging vehicles. Using the combination of guest and vehicle tracking information along with information provided by a destination gaming system, the destination guest experience is customized, including various effects for rides and attractions.
Spot cameras are placed at tracked locations, such as ride seats, where images are captured in response to either external triggers generated by the system or guest indications.

This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

SUMMARY

Certain embodiments commensurate in scope with the originally claimed subject matter are summarized below. These embodiments are not intended to limit the scope of the disclosure; rather, they are intended only to provide a brief summary of certain disclosed embodiments. Indeed, the present disclosure may encompass a variety of forms that may be similar to or different from the embodiments set forth below.

Embodiments described herein relate to a tracking system that efficiently aggregates and/or communicates tracking data between tracking systems, enabling the context of one tracking sensor to enhance tracking by other tracking sensors. More specifically, the contextual information (e.g., location, time, tracked object identities) determined by one tracking sensor may be used to facilitate more efficient and/or more effective tracking by other sensors. For example, such contextual information may result in increased confidence in a tracked object's identity, may enable efficient filtering of the possible identities that can be attributed to a tracked object, and so on. This may result in increased processing efficiencies and may also enable more granular tracking of objects in an open environment.
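The contextual handoff described above can be sketched as follows. This is a minimal, hypothetical sketch: the `TrackedTargetContext` structure, the distance-based confidence score, and the 0.9 threshold are all illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of handing tracked-target context from a first
# sensor's coverage zone to a second sensor; all names are illustrative.
from dataclasses import dataclass

@dataclass
class TrackedTargetContext:
    identity: str
    location: tuple      # (x, y) in the open environment
    timestamp_s: float
    attributes: dict     # e.g. clothing colour, group membership

def hand_off(context, second_sensor_candidates, confidence_threshold=0.9):
    """Score candidates observed by the second sensor against context
    from the first sensor; return None when confidence is too low,
    signalling that additional sensor inputs should be gathered."""
    def confidence(candidate):
        # Toy score: decays with distance from the last known location.
        dx = candidate["location"][0] - context.location[0]
        dy = candidate["location"][1] - context.location[1]
        return 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)

    best = max(second_sensor_candidates, key=confidence)
    if confidence(best) < confidence_threshold:
        return None  # fall back to gathering additional tracking inputs
    return best
```

A real system would combine many context attributes into the score; the single-distance score here only stands in for that aggregation.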
In a first embodiment, a tangible, non-transitory, machine-readable medium including machine-readable instructions according to claim 1 is provided. In a second embodiment, a computer-implemented method according to claim 12 is provided. In a third embodiment, a system according to claim 14 is provided.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings, in which like characters represent like parts throughout the drawings, wherein:

FIG. 1 is a schematic diagram illustrating a multi-sensor tracking component with a contextual tracking system, in accordance with an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating an open environment that uses the system of FIG. 1, in accordance with an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a process for identifying a tracking context, in accordance with an embodiment;
FIG. 4 is a flowchart illustrating a process for using acquired context to identify context at a subsequent tracking s