US-12621794-B2 - System and method for road monitoring

US12621794B2

Abstract

A method and a system for road monitoring comprise the steps of: receiving, via a communication module, a first point cloud data from one or more designated road-side units (RSUs) of a plurality of RSUs located in a defined geographical area; receiving, via the communication module, a second point cloud data from one or more designated vehicle on-board data processing units of vehicles located in the defined geographical area; processing, via a processing module, the first point cloud data and the second point cloud data to generate a processed point cloud data; and transmitting, via the communication module, information derived from the processed point cloud data to one or more of the RSUs and/or the vehicle on-board data processing units of vehicles located in the defined geographical area.
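The receive-merge-transmit pipeline in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the patented implementation; in particular, the voxel-based de-duplication used to merge the two clouds is an assumption of this sketch, since the abstract does not specify a merge strategy:

```python
from typing import List, Tuple

Point = Tuple[float, float, float]  # (x, y, z) coordinates of one data point

def merge_point_clouds(rsu_cloud: List[Point],
                       vehicle_cloud: List[Point],
                       voxel: float = 0.5) -> List[Point]:
    """Combine an RSU cloud and a vehicle cloud into one global view,
    keeping a single point per occupied voxel (hypothetical strategy)."""
    seen = set()
    merged: List[Point] = []
    for p in rsu_cloud + vehicle_cloud:
        key = tuple(int(c // voxel) for c in p)  # quantize to a voxel index
        if key not in seen:
            seen.add(key)
            merged.append(p)
    return merged

# A vehicle point that falls in the same voxel as an RSU point is dropped.
merged = merge_point_clouds([(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)],
                            [(0.1, 0.0, 0.0), (5.0, 5.0, 0.0)])
```

Information derived from `merged` (rather than the raw clouds) would then be transmitted back to the RSUs and on-board units.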

Inventors

  • Jiashi FENG
  • Alpamys Urtay
  • Hang Chen
  • Xinghua Zhu
  • Dongzhe Su
  • Shijun Fan

Assignees

  • HONG KONG APPLIED SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE CO., LTD.

Dates

Publication Date
2026-05-05
Application Date
2023-03-02

Claims (16)

  1. A method for detecting blind areas for vehicles in a traffic environment of a defined geographical area, comprising: receiving, via a communication module of a Vehicle-to-Everything (V2X) system, a first point cloud data from one or more designated road-side units (RSUs) located in the defined geographical area; processing, via a processing module of the V2X system, at least the received first point cloud data to identify any potential blind areas in the traffic environment; in response to a request from the communication module to one or more designated vehicle on-board data processing units of vehicles located in the defined geographical area, receiving, via the communication module, a second point cloud data from said one or more designated vehicle on-board data processing units; processing, via the processing module, the first point cloud data and the second point cloud data by comparing said data to generate merged data comprising a processed point cloud data representing a global view of the traffic environment in the defined geographical area including view information for the detected blind areas; and transmitting, via the communication module, information derived from the processed point cloud data to one or more of the vehicle on-board data processing units of vehicles located in the defined geographical area to provide the view information for the detected blind areas.
  2. The method according to claim 1, wherein the step of receiving a first point cloud data from one or more designated RSUs further comprises extracting a dynamic first point cloud data from the first point cloud data, and comparing and/or merging the extracted dynamic first point cloud data with a static point cloud data previously stored in a memory prior to the processing step.
  3. The method according to claim 1, further comprising a step of partitioning the processed point cloud data into a plurality of point cloud data partitions or subsections prior to the transmitting step.
  4. The method according to claim 3, wherein the transmitting step comprises transmitting information derived from one or more selected point cloud data partitions to one or more of the RSUs and/or the vehicle on-board data processing units located in the defined geographical area.
  5. The method according to claim 1, wherein the step of receiving a first point cloud data comprises receiving point cloud data in relation to at least a section of the defined geographical area from the one or more designated RSUs located in the defined geographical area.
  6. The method according to claim 5, wherein the at least a section of the defined geographical area comprises a plurality of non-overlapping subsections of the defined geographical area.
  7. The method according to claim 1, wherein the step of receiving a second point cloud data comprises receiving point cloud data in relation to at least a section of the defined geographical area from the one or more designated vehicle on-board data processing units of vehicles located in the defined geographical area.
  8. The method according to claim 7, wherein the at least a section of the defined geographical area comprises a plurality of non-overlapping subsections of the defined geographical area.
  9. The method according to claim 7, wherein the vehicle on-board data processing units of vehicles located in the defined geographical area are designated based on rate of data transmission and/or distance from the communication module.
  10. The method according to claim 1, wherein the information derived from the processed point cloud data transmitted via the transmitting step to the one or more vehicle on-board data processing units comprises information relating to one or more blind areas of the one or more RSUs and/or the vehicle on-board data processing units.
  11. A Vehicle-to-Everything (V2X) system for detecting blind areas for vehicles in a traffic environment of a defined geographical area, the system comprising: a communication module of the V2X system configured to receive a first point cloud data from one or more designated road-side units (RSUs) located in the defined geographical area; and a processing module of the V2X system configured to process at least the received first point cloud data to identify any potential blind areas in the traffic environment; wherein the communication module is configured to receive, in response to a request from the communication module to one or more designated vehicle on-board data processing units of vehicles located in the defined geographical area, a second point cloud data from said one or more designated vehicle on-board data processing units; and wherein the processing module is configured to process the received first point cloud data and second point cloud data by comparing said data to generate merged data comprising a processed point cloud data representing a global view of the traffic environment in the defined geographical area including view information for the detected blind areas; wherein the communication module is configured to transmit information derived from the processed point cloud data to one or more of the vehicle on-board data processing units of vehicles located in the defined geographical area to provide the view information for the detected blind areas.
  12. The system according to claim 11, wherein the communication module is configured to receive and transmit point cloud data and information derived from point cloud data from and to one or more of the RSUs and/or the vehicle on-board data processing units of vehicles located in the defined geographical area via one or more cellular vehicle-to-everything (C-V2X) communication networks.
  13. The system according to claim 11, wherein the communication module is configured to extract a dynamic first point cloud data from the first point cloud data, and compare and/or merge the extracted dynamic first point cloud data with a static point cloud data previously stored in a memory prior to processing by the processing module.
  14. The system according to claim 11, wherein the processing module is configured to partition the processed point cloud data into a plurality of point cloud data partitions, and to assign one or more point cloud data partitions for transmission.
  15. The system according to claim 14, wherein the plurality of point cloud data partitions correspond to a plurality of non-overlapping subsections of the defined geographical area.
  16. The system according to claim 15, wherein the plurality of non-overlapping subsections correlate to one or more blind areas of the one or more RSUs and/or the vehicle on-board data processing units of vehicles located in the defined geographical area.
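Claims 3-4 and 14-16 describe partitioning the processed point cloud into non-overlapping subsections so that individual partitions can be selected for transmission. A minimal sketch of one way to do this, assuming a uniform ground-plane grid; the tile size and the grid scheme are illustrative assumptions, not taken from the claims:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]
TileKey = Tuple[int, int]  # (column, row) index of a ground-plane tile

def partition_cloud(cloud: List[Point],
                    tile_size: float = 20.0) -> Dict[TileKey, List[Point]]:
    """Split a processed point cloud into non-overlapping ground-plane
    tiles; each tile can then be assigned for transmission separately."""
    tiles: Dict[TileKey, List[Point]] = defaultdict(list)
    for x, y, z in cloud:
        tiles[(int(x // tile_size), int(y // tile_size))].append((x, y, z))
    return dict(tiles)
```

With this layout, transmitting "information derived from one or more selected point cloud data partitions" (claim 4) amounts to sending only the tiles that cover a receiver's blind areas.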

Description

FIELD OF THE INVENTION

The invention relates to a system and a method for road monitoring and, more particularly, but not exclusively, to a system and a method for detecting blind areas of vehicles and/or other monitoring agents at a road.

BACKGROUND OF THE INVENTION

Connected and Autonomous Vehicles (CAVs) are vehicles configured with an aim to assist or replace human drivers by automating at least some of the driving tasks. In contrast to conventional vehicles, which have typically been configured to use only real-time data retrieved from on-board vehicle modules such as visual sensors to determine or detect potential threats on the road, CAVs utilize the Vehicle-to-Everything (V2X) communications protocol, a vehicular communication protocol configured to deliver information from a vehicle to any entity that may affect the vehicle, and vice versa. The V2X protocol enables communication and/or data exchange between vehicle on-board data processing units and, for example, roadside infrastructure for road safety, management, and/or threat determination purposes. A V2X system incorporates other more specific types of communications including, but not limited to, Vehicle-to-Infrastructure (V2I), Vehicle-to-Vehicle (V2V), Vehicle-to-Pedestrian (V2P), Vehicle-to-Device (V2D), and Vehicle-to-Grid (V2G). Very often, CAVs are equipped with sensors for collecting point clouds; a point cloud comprises a collection of individual data points in a three-dimensional space, with each data point assigned a set of coordinates on the X, Y, and Z axes. Point cloud data is particularly useful for object detection, path planning and vehicle control by the CAVs. Similarly, road-side units (RSUs) such as road-side sensors or other connectivity devices may also be capable of collecting point clouds and subsequently transmitting the point cloud data to edge servers through Cellular-Vehicle-to-Everything (C-V2X) channels.
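The passage above describes point clouds collected by both CAVs and RSUs, and claim 2 refers to separating a dynamic point cloud from a previously stored static one. A minimal sketch of one possible separation, assuming a voxel-occupancy comparison against the static map; the voxel test and its resolution are illustrative assumptions, not details from the patent:

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def extract_dynamic(current: List[Point],
                    static_map: List[Point],
                    voxel: float = 0.5) -> List[Point]:
    """Keep points of the current scan whose voxel is NOT occupied in the
    stored static map, i.e. points likely belonging to moving objects."""
    def vox(p: Point) -> Tuple[int, int, int]:
        return tuple(int(c // voxel) for c in p)  # voxel index of a point
    static_keys = {vox(p) for p in static_map}
    return [p for p in current if vox(p) not in static_keys]
```

Only the (much smaller) dynamic residue then needs to be compared and merged with data from other agents.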
Conventional visual sensors mounted on a vehicle are known to suffer from a limited field of view due to the size and height of the vehicle and the presence of other objects near to and around the vehicle on the road. Among the various V2X networks, V2V cooperative perception requires each vehicle to transmit its precise location, which is sometimes challenging due to common environmental constraints, such as when the vehicles are located in dense urban areas. Furthermore, V2V communication networks such as Dedicated Short Range Communication (DSRC) and Long-Term Evolution-direct (LTE-direct) networks, which offer bandwidths of around 10 Mbps, are not suitable to support sharing of raw point cloud data directly from sensors among multiple agents such as CAVs and RSUs. Sharing processed data instead of raw point cloud data is often undesirable, as it discards valuable information that is extractable from point clouds, which is essential for robust localization of vehicles, and may also introduce latency to the delivery of the processed data.

CN114173307A discloses an optimization method based on a roadside perception fusion system. The method comprises the steps of taking vehicle-end positioning (vehicle-end high-precision GNSS, inertial navigation, or high-precision map combined positioning) as a positioning truth value, and transmitting the positioning truth value to a roadside unit based on a vehicle-road cooperation mode; and determining the sensing accuracy of the roadside sensing equipment. CN114332494A discloses a three-dimensional target detection and identification method based on multi-source fusion by a vehicle. Different environment information is captured through different roadside equipment sensors, with multi-modal features being extracted and then transmitted to a roadside feature fusion center.
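The bandwidth argument above can be checked with back-of-envelope arithmetic. The LiDAR figures below (points per frame, bytes per point, frame rate) are assumed typical values for illustration, not numbers from the patent:

```python
# Why a ~10 Mbps DSRC/LTE-direct link cannot carry a raw point cloud stream.
POINTS_PER_FRAME = 100_000   # assumed mid-range automotive LiDAR
BYTES_PER_POINT = 16         # e.g. x, y, z, intensity as 4-byte floats
FRAMES_PER_SEC = 10          # assumed scan rate

raw_bps = POINTS_PER_FRAME * BYTES_PER_POINT * FRAMES_PER_SEC * 8
print(f"raw stream: {raw_bps / 1e6:.0f} Mbit/s")  # prints "raw stream: 128 Mbit/s"
assert raw_bps / 1e6 > 10  # exceeds the ~10 Mbps budget by more than 10x
```

Even one such sensor overshoots the channel by an order of magnitude, before accounting for multiple agents sharing the medium.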
The roadside feature fusion center then fuses the obtained multi-path multi-modal features into multi-source fusion features for the vehicle to perform target identification and detection. CN112767475A discloses an intelligent roadside sensing system based on C-V2X, radar and vision. For visual target detection and radar multi-target tracking, a lightweight target detection neural network model and a weighted neighborhood data association multi-target tracking algorithm based on unscented Kalman filtering are applied. A multi-sensor fusion time synchronization method based on interpolation and extrapolation is designed to synchronize data collected from different sensors, and then, in combination with C-V2X communication, a fusion result is corrected and compensated through vehicle-road cooperation data. US2021225171A1 discloses a vehicle-to-infrastructure cooperation information processing method comprising generating first on-board perception data including data of an obstacle around the target vehicle sensed by the target vehicle; generating virtual obstacle data for representing the target vehicle according to posi