EP-4738041-A1 - TARGETING APPARATUS AND METHOD FOR USING INFORMATION-THEORY ENABLED TARGET INDICATORS IN GPS-DENIED ENVIRONMENTS
Abstract
The invention provides precise targeting in GPS-denied environments using a plurality of Computer-Readable Image Markers (CRIMs) deployed around a target and imaged from altitude. Each CRIM carries a unique code and may include a moisture-absorbing anchor to stabilise its pose. Wide-area imagery is processed to select a stable subset of CRIMs whose relative geometry is unchanged; projective geometry then yields target coordinates for autonomous navigation. An attack drone uses onboard detection of CRIMs to update a homography in real time, ignoring displaced or defaced markers. Optional near-IR and UV-fluorescent embodiments improve night or low-contrast performance; a low-rate link (10-100 bps) can provide infrequent coordinate refreshes without continuous video. The approach reduces compute and bandwidth demand, is robust to marker loss or spoofing, and enables sub-metre terminal guidance with low unit cost.
Inventors
- Cumpson, Peter
Assignees
- Cumpson, Peter
Dates
- Publication Date: 2026-05-06
- Application Date: 2025-11-25
Claims (15)
- A system for autonomous targeting of munitions using Computer-Readable Image Markers (CRIMs), comprising:
  1. a plurality of CRIMs, each having a unique identifier and constructed from lightweight, durable materials;
  2. a chemical anchor integrated with each CRIM, capable of absorbing environmental moisture to increase its weight and stabilize its ground position;
  3. an aircraft configured to deploy the plurality of CRIMs around a predefined target area;
  4. a high-altitude imaging device configured to capture images of the CRIMs and the target area;
  5. a processing unit programmed to analyze the relative stability of each CRIM's position and to generate a digital map marking the target and CRIMs, wherein the processing unit is configured to ignore CRIMs that exhibit relative positional changes.
- The system of claim 1, wherein the processing unit utilizes projective geometry to determine the coordinates of the target based on the relative positions of stable CRIMs.
- The system of claim 1, wherein information-theoretic principles are applied to validate the relative stability of CRIMs, enabling target localization despite potential movement or destruction of a subset of CRIMs.
- A method for autonomous navigation of a drone to a target in a GPS-denied environment, comprising the steps of:
  1. distributing a plurality of CRIMs around the target area by deploying them from an aircraft or drone;
  2. capturing an image of the CRIMs and the target using a high-altitude imaging device;
  3. processing the captured image to identify the locations of the CRIMs and generating a digital map for navigation;
  4. programming the drone to utilize only CRIMs that maintain stable relative positions for autonomous navigation to the target.
- The method of claim 4, wherein the processing unit employs a probabilistic model to assess and confirm CRIM stability based on observed relative positions.
- The system of claim 1, wherein the chemical anchor comprises a water-absorbing material selected from a group including sodium polyacrylate, calcium chloride, and lithium chloride.
- The system of claim 1, wherein the CRIMs are manufactured to be selectively reflective in the near-infrared spectrum, allowing detection by imaging devices equipped with corresponding filters.
- The method of claim 4, further comprising the step of periodically updating the drone's target coordinates based on real-time CRIM positioning data from low-bandwidth communication channels.
- The system of claim 1, wherein the CRIMs are printed with inks that fluoresce under ultraviolet light, facilitating target identification in low-light or nighttime environments.
- A drone according to claim 1, wherein the drone comprises:
  - a GPS-independent navigation unit,
  - an imaging device capable of capturing images of CRIMs,
  - a processing unit that evaluates CRIM data and determines CRIM reliability based on their relative positions.
- The system of claim 1, wherein the processing unit disregards CRIMs that have been defaced, displaced, or exhibit irregularities in relative positioning.
- The method of claim 4, wherein the CRIMs are designed with a Hamming distance of at least five, reducing the likelihood of false-positive identification.
- The system of claim 1, wherein the CRIMs include a camouflage layer that reflects specific wavelengths invisible to the human eye, but identifiable by imaging devices equipped with compatible filters.
- The method of claim 4, further comprising using metameric printing techniques for CRIMs, making them inconspicuous in visible light but detectable in specific spectral ranges such as near-infrared.
- The system of claim 1, wherein each CRIM includes dual-sided coding, allowing identification regardless of landing orientation.
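For illustration only (not part of the claims), the projective-geometry localization and stable-subset selection recited in the claims above can be sketched as follows. The sketch assumes a pre-strike reference map of CRIM positions and a target point in map coordinates; a leave-one-out fit flags a displaced marker, and the homography estimated from the stable subset projects the target into the drone's image frame. All function names, point values, and thresholds are illustrative assumptions, not taken from the specification.

```python
import numpy as np

def fit_homography(src, dst):
    # Direct Linear Transform: solve A h = 0 for the 3x3 homography
    # mapping each src point (x, y) to its dst point (u, v).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null vector = flattened H (up to scale)

def project(H, pt):
    # Apply the homography in homogeneous coordinates.
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

def stable_subset_homography(map_pts, img_pts):
    # Leave-one-out search: the subset that excludes a displaced CRIM
    # fits its remaining markers with near-zero residual, so the fit
    # with the smallest worst-case residual identifies the stable set.
    n = len(map_pts)
    best = None
    for drop in range(n):
        keep = [i for i in range(n) if i != drop]
        H = fit_homography(map_pts[keep], img_pts[keep])
        worst = max(np.linalg.norm(project(H, map_pts[i]) - img_pts[i])
                    for i in keep)
        if best is None or worst < best[0]:
            best = (worst, H, keep)
    return best[1], best[2]

# Reference map of six CRIMs and a true map->image homography (affine here).
H_true = np.array([[1.8, -0.4, 120.0], [0.4, 1.8, -35.0], [0.0, 0.0, 1.0]])
map_pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 2], [3, 8]], float)
img_pts = np.array([project(H_true, p) for p in map_pts])
img_pts[4] += [6.0, -4.0]            # CRIM 4 has been displaced on the ground

H, stable = stable_subset_homography(map_pts, img_pts)
target_img = project(H, (5.0, 5.0))  # target's map coordinates -> image frame
```

With six markers and one displaced, the fit excludes CRIM 4 and the projected target agrees with the true homography to numerical precision. A fielded system would presumably use a robust estimator such as RANSAC rather than exhaustive leave-one-out once more markers are deployed.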
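The Hamming-distance criterion in the claim reciting a minimum distance of at least five can likewise be sketched: a codebook with minimum pairwise distance 5 permits unambiguous correction of up to two bit errors (since 2·2 + 1 = 5), and anything further from every codeword can be rejected as a likely false positive. The three-word 16-bit codebook below is a hypothetical example, not taken from the specification.

```python
def hamming(a, b):
    # Number of differing bits between two code words.
    return bin(a ^ b).count("1")

def min_pairwise_distance(codebook):
    return min(hamming(a, b)
               for i, a in enumerate(codebook) for b in codebook[i + 1:])

def decode(word, codebook, max_errors=2):
    # Nearest-codeword decoding; with minimum distance >= 5, up to two
    # bit errors are corrected unambiguously. Reject anything further
    # away as a likely false positive (a defaced or spurious marker).
    best = min(codebook, key=lambda c: hamming(word, c))
    return best if hamming(word, best) <= max_errors else None

codebook = [0b0000000000000000, 0b0000000000011111, 0b1111100000000000]
assert min_pairwise_distance(codebook) >= 5

received = codebook[1] ^ 0b0000000000000101   # two bits flipped by damage
decoded = decode(received, codebook)          # recovers codebook[1]
```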
Description
Field of the Invention
The present invention relates to systems and methods for autonomous navigation and targeting of munitions using Target Indicators (TIs) comprising Computer-Readable Image Markers (CRIMs), which may be of particular advantage in environments where GPS signals are denied. More specifically, it addresses targeting challenges faced by small drones, enhancing their operational reliability in contested environments. There are also civilian applications in the drone delivery of humanitarian aid for disaster mitigation.
Related Applications
This application claims priority from UK patent application no. GB2416317.2, filed 5th November 2024, and Australian patent application no. AU2025900505, filed 21 February 2025.
Background
In modern warfare, small unmanned aerial vehicles (UAVs), often referred to as Micro Air Vehicles (MAVs) or Mini Uncrewed Air Systems (MUAS), face significant challenges when navigating and targeting in GPS-denied environments due to electronic warfare and jamming. Traditional navigation methods, including visual navigation using natural landmarks, can be error-prone and resource-intensive. To quote a recent review: "one of the greatest hurdles to visual localization is that the computational requirements can easily exceed the resources available on a simple robot. To get around this problem, there are four different approaches: offload the computation to an external computer, utilize new technology, reduce the computational burden in software, and increase the processing power available to the robot" [1]. Given the challenges of offloading computation when the electromagnetic (EM) spectrum is contested, and of increasing the processing power available when there are weight and power restrictions, we propose a new technology that reduces the computational burden in software. There is a recognized problem in targeting specific battlefield targets with airborne drones (see, for example, newspaper articles [2]).
The last mile, or few miles, to the battlefront is an important area. Electronic counter-measures can prevent effective communication with drones, or limit it to low data rates, and can certainly confuse GPS and similar navigational systems, so that an operator can find it difficult to direct a drone visually onto the right target using (unreliable or absent) video transmissions from the drone. Often that means the drone operator must be within visual range of the target so as to avoid relying on a video link that could be compromised. This in turn puts the operator in more danger than necessary. Ideally the operator should be located in relative safety far behind the front line. There is some discussion of this problem in the literature, but it is often (perhaps euphemistically) couched in terms of "docking" rather than targeting [3]. One approach to this problem is to use "artificial intelligence" or "machine learning" techniques to give the computer within the drone the capability to navigate using reference points from the natural environment (roads, rivers, trees, hedgerows, etc.). This is expensive, requiring much more powerful hardware and software than would otherwise be needed merely to steer the drone. Trees can look similar. Buildings can look almost identical. In war, even more visual-interpretation errors can occur, as features in the landscape can change rapidly on the battlefield, and survey photos of the area can quickly become out of date. Even things such as trees losing their leaves or long afternoon shadows can cause errors that take massive training sets to reduce. Another approach is to employ inertial sensors and perhaps gyroscopes, often fabricated using Micro ElectroMechanical Systems (MEMS) techniques. Sometimes this is combined with optical measurements using "data fusion" methods [4].
However, the MEMS accelerometers and gyroscopes that can be put onto a drone cheaply are not very accurate when their outputs are integrated over time to give position. The result is that the location accuracy of these methods is inadequate. A small drone with a small explosive payload needs to be directed with great precision, simply as a consequence of its small size: a grenade-sized explosive needs to be detonated within about a metre of the target. This is currently impossible over distances of 1 km or more using cheap inertial sensors, though data fusion with visual odometry may help. A larger payload, or a higher-accuracy inertial sensor, would both be much more expensive to deploy. The emphasis above on inexpensive methods is timely; it is a result of developments on battlefields since about 2020. Imagine, for a moment, that you are a soldier in a dugout somewhere in eastern Ukraine today. Stockpiles of expensive missiles are exhausted. In front of you are three small drones, each costing about US$1,000. Your task is to eliminate an artillery piece (or a radar system, or a battalion HQ) tomorrow. You know that electronic warfare from your opponent means that, on average, only on