US-12625234-B2 - Navigation system and method with continuously updating ML

US 12625234 B2

Abstract

A marine vessel management system and method, comprising: receiving input data comprising at least radar input data indicative of a first field of view and imagery input data indicative of a second field of view at least partially overlapping with said first field of view; processing the radar input data to determine data indicative of one or more reflecting objects within an overlapping portion of said first field of view; determining respective locations within said second field of view where said reflecting objects are identified, and obtaining radar meta-data of said reflecting objects; processing said imagery input data at said respective locations in an overlapping portion of said second field of view; determining one or more image data pieces corresponding with sections of said imagery data associated with said reflecting objects; and using said radar meta-data for generating label data and generating output data comprising said image data sections and said label data.
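The labeling flow summarized in the abstract (radar detections in an overlapping field of view mapped to image sections, with radar meta-data reused as labels) can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all names, the linear azimuth-to-pixel mapping, and the fixed crop width are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class RadarDetection:
    azimuth_deg: float   # bearing of the reflecting object
    range_m: float       # distance to the object
    meta: dict           # radar meta-data (size, Doppler, cross section, ...)

def azimuth_to_pixel(azimuth_deg, cam_center_deg, cam_fov_deg, image_width):
    """Map a radar bearing to a horizontal pixel column in the camera frame,
    assuming a simple linear projection across the camera field of view."""
    offset = azimuth_deg - cam_center_deg
    if abs(offset) > cam_fov_deg / 2:
        return None  # detection lies outside the camera field of view
    return int((offset / cam_fov_deg + 0.5) * image_width)

def label_detections(detections, cam_center_deg, cam_fov_deg, image_width, crop_w=64):
    """For each radar detection inside the overlapping portion of the camera
    field of view, produce an (image-section, label) pair; the label simply
    carries the radar meta-data, yielding auto-labeled training samples."""
    samples = []
    for det in detections:
        x = azimuth_to_pixel(det.azimuth_deg, cam_center_deg, cam_fov_deg, image_width)
        if x is None:
            continue  # claim 7 would instead command the camera to re-aim here
        section = (max(0, x - crop_w // 2), min(image_width, x + crop_w // 2))
        samples.append({"image_section": section, "label": det.meta})
    return samples
```

In this sketch a detection at 5° off the camera boresight, with a 60° camera field of view and a 1920-pixel-wide frame, lands near column 1120 and yields one labeled crop; a detection at 90° is discarded as outside the overlap.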

Inventors

  • Omer REGEV

Assignees

  • ISRAEL AEROSPACE INDUSTRIES LTD.

Dates

Publication Date
2026-05-12
Application Date
2022-06-08
Priority Date
2021-06-17

Claims (20)

  1. A method implemented by one or more processors and memory circuit, the method comprising: receiving input data comprising at least radar input data indicative of a first field of view and imagery input data indicative of a second field of view being at least partially overlapping with said first field of view; processing the radar input data to determine data indicative of one or more reflecting objects within an overlapping portion of said first field of view, determining one or more respective locations within said second field of view where said one or more reflecting objects are identified, and obtaining radar meta-data of said one or more reflecting objects; processing said input imagery data at said respective locations in an overlapping portion of said second field of view, and determining one or more image data pieces corresponding with one or more sections of said imagery data associated with said one or more reflecting objects; using said radar meta-data for generating label data and generating output data comprising said one or more image data sections and said label data, thereby facilitating connection of said radar meta-data with image data of one or more objects to enable machine learning training for object detection based on said imagery data; and providing said output data comprising one or more image data pieces and labeling data for training processing of one or more artificial intelligence (AI) modules to thereby enable continuous training for object recognition and/or classification in said imagery data.
  2. The method of claim 1, wherein said radar meta-data comprises one or more data pieces indicative of radar signature of one or more objects reflecting a radar signal.
  3. The method of claim 1, wherein said radar meta-data comprises one or more data pieces selected from the group of: object size, object distance, object closing speed, object aspect, object location, angles, azimuth, vector, Doppler, cross section, and signature.
  4. The method of claim 1, wherein said AI module is adapted for object detection in a marine environment, enabling collision prevention in a marine vessel in accordance with said imagery data.
  5. The method of claim 4, wherein said AI module is further configured for receiving location data from one or more location detection units (GPS) and for determining a navigational route of said marine vessel.
  6. The method of claim 5, wherein said AI module is connectable to steering controls of said marine vessel, thereby enabling at least partially autonomous operation of said marine vessel.
  7. The method of claim 1, further comprising processing the radar input data to determine data indicative of one or more reflecting objects within said first field of view, and upon determining that one or more respective locations where reflecting objects are identified are outside of said second field of view, generating an operation command to obtain imagery data from said one or more respective locations.
  8. The method of claim 1, further comprising providing data on position of said one or more reflecting objects and utilizing an automatic identification system (AIS) and said position of said one or more reflecting objects to obtain data on identity of said one or more reflecting objects, using said data on identity for generating additional label data and generating output data comprising said one or more image data sections, said label data, and said additional label data.
  9. A marine vessel management system, comprising: at least one processor and memory circuit, one or more camera units, and one or more radar units, wherein the at least one processor comprises an auto-captain module, an object detection training module, and a training data generator; wherein the auto-captain module comprises an artificial intelligence (AI) module continuously trainable based on labeled image data and is configured to receive imagery data from said one or more camera units and to process said imagery data to determine data on one or more objects within a selected field of view around said marine vessel; wherein the training data generator is configured and operable for receiving input data from said one or more camera units and one or more radar units, said input data comprising at least radar input data indicative of a first field of view and imagery input data indicative of a second field of view being at least partially overlapping with said first field of view, processing the radar input data to determine data indicative of one or more reflecting objects within an overlapping portion of said first field of view, determining one or more respective locations within said second field of view where said one or more reflecting objects are identified, and obtaining radar meta-data of said one or more reflecting objects; processing said input imagery data at said respective locations in an overlapping portion of said second field of view, and determining one or more image data pieces corresponding with one or more sections of said imagery data associated with said one or more reflecting objects, using said radar meta-data for generating label data and generating output data comprising said one or more image data sections and said label data, thereby facilitating connection of said radar meta-data with image data of one or more objects to enable machine learning training for object detection based on said imagery data; wherein the object detection training module is configured to receive said output labeled data and update training of said AI module of said auto-captain module for detecting objects based on said output labeled data, thereby enabling continuous updating of the training of said AI module.
  10. The marine vessel management system of claim 9, wherein said AI module is adapted for processing input imagery data received from said one or more camera units and determining data on one or more objects identified in said imagery data, to thereby provide object recognition of one or more objects from said imagery data.
  11. The marine vessel management system of claim 9, wherein said radar meta-data comprises one or more data pieces indicative of radar signature of one or more objects reflecting said radar signal.
  12. The marine vessel management system of claim 9, wherein said radar meta-data comprises one or more data pieces selected from the group of: object size, object distance, object closing speed, object aspect, object location, angles, azimuth, vector, Doppler, cross section, and signature.
  13. The marine vessel management system of claim 9, wherein said AI module is further configured for receiving location data from one or more location detection units (GPS) and for determining a navigational route of said marine vessel.
  14. The marine vessel management system of claim 9, wherein said auto-captain module is connectable to steering controls of said marine vessel and configured for varying at least one of speed and heading of said marine vessel to thereby enable at least partially autonomous operation of said marine vessel.
  15. The marine vessel management system of claim 9, further comprising processing the radar input data to determine data indicative of one or more reflecting objects within said first field of view, and upon determining that one or more respective locations where reflecting objects are identified are outside of said second field of view, generating an operation command to obtain imagery data from said one or more respective locations.
  16. The marine vessel management system of claim 9, further comprising an automatic identification system (AIS) module, wherein said training data generator is further configured to provide data on position of said one or more reflecting objects and obtain from said AIS module data on identity of marine vessels located at said position, using said data on identity for generating additional label data and generating output data comprising said one or more image data sections, said label data, and said additional label data.
  17. A system for generating labeled training data, the system comprising: a processing utility comprising one or more processors, a memory unit, and a communication module connectable to one or more camera units and one or more radar units; the processing utility being configured for receiving input data comprising at least radar input data indicative of a first field of view and imagery input data indicative of a second field of view being at least partially overlapping with said first field of view; processing the radar input data to determine data indicative of one or more reflecting objects within an overlapping portion of said first field of view, determining one or more respective locations within said second field of view where said one or more reflecting objects are identified, and obtaining radar meta-data comprising one or more data pieces indicative of radar signature of one or more objects reflecting said radar signal; processing said input imagery data at said respective locations in an overlapping portion of said second field of view, and determining one or more image data pieces corresponding with one or more sections of said imagery data associated with said one or more reflecting objects, using said radar meta-data for generating label data and generating output data comprising said one or more image data sections and said label data, thereby facilitating connection of said radar meta-data with image data of one or more objects to enable machine learning training for object detection based on said imagery data; wherein said processing utility is further configured to provide said output data comprising one or more image data pieces and labeling data for training processing of one or more AI modules to thereby enable continuous training for object detection in said imagery data.
  18. The system of claim 17, wherein said processing utility comprises a radar reflection detector, a radar signature processing module, and an FOV analyzer; said radar reflection detector is configured to receive input data from the one or more radar units and determine data of one or more radar signal reflections indicative of one or more objects in a field of view of the one or more radar units, and location of said one or more objects; the radar signature processing module is configured to receive and process data on said radar signal reflections and determine data on radar signature of said one or more objects; said FOV analyzer is configured to receive input imagery data from said one or more camera units and said data on location of said one or more objects, and to process the input imagery data to determine one or more image data pieces associated with said location of said one or more objects; the processing utility generates output data comprising said image data pieces and labeling data associated with said data on radar signature of said one or more objects, thereby generating labeled training data for training of one or more artificial intelligence (AI) modules.
  19. The system of claim 17, wherein said one or more AI modules are adapted for object detection in a marine environment, enabling collision prevention in a marine vessel in accordance with said imagery data.
  20. The system of claim 17, wherein said one or more AI modules are connectable to steering controls of said marine vessel, thereby enabling at least partially autonomous operation of said marine vessel.
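Claims 8 and 16 add identity labels by cross-referencing detected object positions against AIS (automatic identification system) reports. A minimal sketch of that matching step, under the assumption of a local planar coordinate frame and a hypothetical distance gate, could look like this; the record fields and threshold are illustrative, not drawn from the patent.

```python
import math

def ais_identity_labels(reflecting_positions, ais_records, max_dist_m=200.0):
    """Match radar-detected object positions against AIS position reports and
    return identity labels for objects with a sufficiently close AIS record.

    reflecting_positions: {object_id: (x_m, y_m)} in a local planar frame.
    ais_records: list of dicts with "x", "y", "mmsi", "name" keys (assumed).
    """
    labels = {}
    for obj_id, (x, y) in reflecting_positions.items():
        best = None
        for rec in ais_records:
            d = math.hypot(x - rec["x"], y - rec["y"])
            if d <= max_dist_m and (best is None or d < best[0]):
                best = (d, rec)  # keep the nearest AIS report within the gate
        if best is not None:
            labels[obj_id] = {"mmsi": best[1]["mmsi"], "name": best[1]["name"]}
    return labels
```

An object with no AIS report within the gate simply receives no identity label; its radar meta-data label from the main pipeline is unaffected.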

Description

TECHNOLOGICAL FIELD

The present invention relates to techniques of automatic navigation, traffic avoidance, and anti-collision using AI (Artificial Intelligence) control technology, and specifically relates to an AI that is autonomously and automatically updated by automatic and "on the fly" machine learning training in maritime traffic and marine environments.

BACKGROUND

The rising interest in autonomous vehicles also extends to autonomous marine vessels. Although marine environments and routes may be promising candidates for autonomous navigation and handling, existing IMO (International Maritime Organization) regulations and rules require the presence of a 24/7 human lookout by sight on the bridge of the vessel, so as to make a full appraisal of the situation and of the risk of collision. Such regulations direct the use of optical imaging and optical image processing as input for automatic or autonomous lookout functions and navigational control of the vessel.

GENERAL DESCRIPTION

Autonomous or automatic control of marine vessels generally requires operation of the vessel in several respects, including navigation, avoidance of obstacles, and resolution of path and traffic conflicts with other vessels. As marine regulations require the use of visual input, in the form of a human lookout by sight making a full appraisal of the situation and of the risk of collision, control over automatic and autonomous navigation of marine vessels requires processing of visual optical inputs. To this end, a proper control system for such automatic or autonomous navigation of a marine vessel is trained for detecting and analyzing various objects in different visual conditions, to enable object detection, object classification, and determination of characteristics of object behavior, as well as route re-planning or temporary route alteration to avoid collision and to timely resolve various navigational conflicts or potential conflicts that may arise.
To enable automatic and/or autonomous marine navigation in various conditions, while complying with the regulations, the present technique provides ongoing learning and training of the AI technology by automatic and "on the fly" machine learning control, to enable detection, classification, and generation of behavior profiles of various objects in various visual conditions. To this end, the present invention utilizes a sensing arrangement on the marine vessel, including one or more optical imagers (such as one or more camera units) and one or more radar systems, and a navigational control system (control system) comprising one or more computer processors and a memory unit. The control system may also comprise local or remote displays, one or more ship control units providing control over the ship autopilot and engine throttle, and ship interface units. The control system generally comprises input and output communication ports operatively connected to the sensing arrangement for receiving input image data and input radar data. The control system is generally connected to the vessel engines and steering navigation modules to control steering and thrust of the vessel, thereby controlling, piloting, and navigating the vessel. The control system may further include one or more processing modules, including, for example, a safety module configured for receiving and processing input data, at least image data, and determining risk levels associated with objects around the vessel or in its path, and an auto-captain module configured for receiving and processing data on the vessel and the environment around it, applying one or more decision-making algorithms, and generating operation commands to the engines and steering modules. The control system may also comprise additional modules, such as an efficiency module for monitoring and optimizing fuel use of the vessel, and a cyber-security module configured for monitoring and verifying data communications to limit security breaches.
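The division of labor between the safety module (risk levels from object data) and the auto-captain module (decision-making and operation commands) described above can be sketched as a simple control step. This is an illustrative sketch only; the time-to-closest-approach heuristic, the thresholds, and the command format are hypothetical assumptions, not the patent's decision-making algorithms.

```python
def risk_level(closing_speed_mps, distance_m):
    """Safety-module sketch: grade risk by a crude time-to-closest-approach
    estimate (distance / closing speed), with hypothetical thresholds."""
    if closing_speed_mps <= 0:
        return "low"  # object is holding distance or opening
    tca_s = distance_m / closing_speed_mps
    if tca_s < 120:
        return "high"
    if tca_s < 600:
        return "medium"
    return "low"

def auto_captain_command(detections):
    """Auto-captain sketch: derive one operation command for the engines and
    steering from the worst risk level among current detections."""
    worst = max((risk_level(d["closing_speed"], d["distance"]) for d in detections),
                key=["low", "medium", "high"].index, default="low")
    if worst == "high":
        return {"throttle": "reduce", "heading_change_deg": 20}
    if worst == "medium":
        return {"throttle": "hold", "heading_change_deg": 10}
    return {"throttle": "hold", "heading_change_deg": 0}
```

For example, an object 600 m away closing at 10 m/s gives an estimated 60 s to closest approach, which this sketch grades as high risk and answers with a throttle reduction and a heading change.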
Generally, the control system may include an object analyzing module configured for receiving input imagery data from the one or more imagers, identifying one or more objects within the imagery data, determining classification data of the so-identified objects, and providing corresponding output to at least one of the safety module and the auto-captain module. Typically, in a marine environment, the object analyzing module may be trained for detecting, recognizing, and classifying various types of marine vessels, including, e.g., ships, boats, rafts, obstacles, and various platforms (e.g., an oil rig), as well as for analyzing behavior profiles of such marine vessels as distinct from stationary objects. Additionally, the analyzing module may be trained to determine object characteristics such as object lights, smoke, flags, shape, sails, vessel vector and maneuvers, rear water trail, front (nose) water wave, water foam, and more. Further, the object analyzing module may be trained for recognizing land regions such as islands and shore. Generally, the object analyzing module, as well as one or more