CN-122024201-A - Method and system for detecting objects in blind spots
Abstract
The present disclosure relates to methods and systems for detecting objects in blind spots. In particular, it relates to a method (200) for object detection in at least one blind spot (10) around a vehicle (20), comprising: capturing (202) image data of at least one area around the vehicle (20) at least partly covering the blind spot (10); processing (206) the obtained image data at least partly to locate at least one object in the obtained image data; converting (208) the processed image data at least partly into corresponding circumferential coordinates, the corresponding circumferential coordinates being defined with respect to a viewpoint from which the image data was obtained; converting (210) the converted circumferential coordinates at least partly into corresponding orthogonal coordinates, the corresponding orthogonal coordinates relating to orthogonal viewpoints with respect to the blind spot (10); and detecting (212) a movement of at least part of the object with respect to the blind spot (10) based at least on at least part of the converted orthogonal coordinates. The present disclosure also relates to a system (100) for object detection in a blind spot (10) around a vehicle (20).
Inventors
- SALAMEH ADNAN
- R. Korolykov
Assignees
- 萨玛创新有限公司
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-11-07
- Priority Date: 2024-11-08
Claims (14)
- 1. A method (200) for object detection in at least one blind spot (10) around a vehicle (20), the method comprising: capturing (202) image data of at least one region in the surroundings of the vehicle (20) at least partially covering the blind spot (10); processing (206) the obtained image data at least partially to locate at least one object in the obtained image data; converting (208) the processed image data at least partially into corresponding circumferential coordinates, wherein the corresponding circumferential coordinates are defined with respect to a viewpoint from which the image data was obtained; converting (210) the converted circumferential coordinates at least partially into corresponding orthogonal coordinates, wherein the corresponding orthogonal coordinates relate to orthogonal viewpoints with respect to the blind spot (10); and detecting (212) a movement of at least a part of the object relative to the blind spot (10) based at least on at least a part of the converted orthogonal coordinates.
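The two coordinate conversions recited in claim 1 can be illustrated with a short sketch: positions relative to the camera viewpoint are expressed in circumferential coordinates (range and azimuth around the viewpoint) and then converted into orthogonal (Cartesian) coordinates. This is only one plausible reading of the claim, not an implementation disclosed by the patent; the function names and the assumption that positions are already in metres on the ground plane are ours.

```python
import math

def to_circumferential(x, y, cam_x=0.0, cam_y=0.0):
    """Convert a ground-plane point (metres, relative to the camera
    viewpoint) into circumferential coordinates (range, azimuth)."""
    dx, dy = x - cam_x, y - cam_y
    return math.hypot(dx, dy), math.atan2(dy, dx)

def to_orthogonal(r, azimuth, cam_x=0.0, cam_y=0.0):
    """Convert circumferential (range, azimuth) coordinates back into
    orthogonal x/y coordinates with the same origin."""
    return cam_x + r * math.cos(azimuth), cam_y + r * math.sin(azimuth)
```

The two functions are exact inverses of each other, so a point can be moved between the representations without loss.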
- 2. The method (200) of claim 1, wherein (i) the blind spot is optionally located in front of, to the side of and/or behind the vehicle with respect to the main driving direction and/or the position and/or orientation of the cab and/or driver's seat; (ii) the image data is captured by at least one camera (102); (iii) the processing, converting and/or detecting is at least partly performed by at least one control unit (104), the at least one control unit (104) optionally being arranged at least partly in the vehicle (20) or in at least one base station, wherein optionally the image data is obtained by the control unit (104) from the camera (102); (iv) the circumferential coordinates are defined with respect to a viewpoint relating to the position of the camera (102), and/or a coordinate origin is defined from the viewpoint, in particular the viewpoint of the camera (102); (v) the object is at least one of a nearby vehicle, a person, an animal, a cyclist, a pedestrian, a wall, a building, an obstacle, a plant, a tree and/or a vulnerable road user; and/or (vi) the movement of at least said part of said object comprises at least partly entering and/or exiting the blind spot (10).
- 3. The method (200) according to claim 1 or 2, wherein the step of processing (206) further comprises classifying the object in the obtained image data, wherein optionally the classifying comprises classifying the object as, optionally, a person, an animal, a cyclist, a wall, an obstacle, a plant, a tree, a vulnerable road user, a building, a vehicle, a pedestrian, etc., and/or comprises classifying a risk level, optionally a high or a low risk level, and/or comprises classifying a probability of a potential collision with the object, optionally a high or a low probability.
- 4. The method (200) according to claim 2 or 3, wherein the control unit (104) is configured to perform classification and/or localization of objects in the obtained image data using artificial intelligence methods, in particular deep learning methods and/or neural network models, wherein optionally the neural network model comprises a plurality of convolution layers, optionally with YOLO backbones, ResNet heads, and segmentation and semantic segmentation leading to classification and localization.
- 5. The method (200) according to one of claims 2 to 4, wherein the control unit (104), in particular when using the artificial intelligence method, is configured to find at least one projection point of one or more elements, optionally the object, optionally at least one person, at least one vehicle, at least one cyclist, at least one animal, at least one wall, at least one tree, at least one obstacle, at least one vulnerable road user, at least one plant, at least one building and/or the like, in the captured image data, wherein optionally at least one calibration is used to assign pixels or groups of pixels of the captured image data to one or more locations on the ground, optionally by projecting points from a top view or a bird's eye view.
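A common reading of the "projection point" of claim 5 is the pixel at which a detected element touches the ground, frequently approximated as the bottom-centre of its bounding box; that pixel can then be mapped to a ground location via the calibration. The sketch below follows that assumption — the helper name and the bounding-box convention are ours, not taken from the patent.

```python
def projection_point(bbox):
    """Approximate the ground-contact (projection) point of a detection.

    bbox = (x_min, y_min, x_max, y_max) in pixel coordinates, with the
    y axis growing downwards, so y_max is the lowest image row of the
    box and the bottom-centre approximates where the object meets the
    ground.
    """
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2.0, y_max)
```

The resulting pixel would typically be fed into the ground-plane calibration to obtain a top-view position.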
- 6. The method (200) of claim 5, wherein, in addition to the projection points, the elements and/or the objects are classified, in particular after the elements have been identified as objects, in particular by the control unit (104) and/or the artificial intelligence method using at least one additional point characterizing the objects and/or object characteristics, such as size, shape, color, etc., optionally classified as a person, an animal, a cyclist, a wall, a vehicle, a building, an obstacle, a vulnerable road user, a plant or a tree, etc., and wherein optionally the actual and complete object shape and position relative to the vehicle (20) are calculated.
- 7. The method (200) according to any one of the preceding claims, wherein the camera (102) is placed on top of a front or rear windscreen of the vehicle (20) and/or on an A-pillar and/or on a rearview device of the vehicle (20) and/or is optionally adapted to capture images of the blind spot (10).
- 8. The method (200) of any of the preceding claims, wherein the step of converting (208) the processed image data into corresponding circumferential coordinates corresponds to a conversion from pixel coordinates to real-world coordinates, and/or the step of converting (210) corresponds to a perspective transformation, optionally providing 2D or 3D real-world coordinates of the recorded area, optionally with a coordinate origin related to the position of the camera or to a specific point of the vehicle (20) or around the vehicle (20), optionally the specific point around the vehicle (20) being a specific point on the ground.
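The pixel-to-real-world perspective transformation of claim 8 is often realised as a planar homography from the image to the ground plane. A minimal sketch follows; the matrix values are purely illustrative placeholders (a real matrix would come from the calibration step), and the function name is ours.

```python
import numpy as np

# Assumed 3x3 ground-plane homography from a one-off calibration
# (e.g. four pixel/world point correspondences). Illustrative values only.
H = np.array([[0.02,  0.00, -6.4],
              [0.00, -0.05, 24.0],
              [0.00,  0.00,  1.0]])

def pixel_to_world(u, v, H=H):
    """Map a pixel (u, v) to 2D real-world ground coordinates via a
    projective (perspective) transformation in homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With a coordinate origin tied to the camera position, the output can be interpreted directly as metres in front of and beside the vehicle.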
- 9. The method (200) according to any one of the preceding claims, wherein the control unit (104) has pre-stored coordinates of a predetermined region of interest, in particular in the blind spot (10), wherein optionally the predetermined region of interest is part or all of the blind spot (10), and wherein optionally the step of detecting (212) is based on matching at least one of the converted orthogonal coordinates with the coordinates of the predetermined region of interest in the blind spot (10) to detect the entry of the object into the blind spot (10).
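The matching of claim 9 can be sketched as a containment test of the object's orthogonal coordinates against a pre-stored region of interest, with an "entry" event raised on the outside-to-inside transition. This is an illustration of the idea only; the rectangular region, the function names and the coordinate values are our assumptions.

```python
# Pre-stored blind-spot region of interest in vehicle-frame metres
# (x_min, y_min, x_max, y_max). Illustrative values only.
BLIND_SPOT = (-1.0, 0.0, 1.0, 3.0)

def inside_roi(point, roi):
    """True if an (x, y) point lies within the axis-aligned region."""
    x, y = point
    x0, y0, x1, y1 = roi
    return x0 <= x <= x1 and y0 <= y <= y1

def detect_entry(prev_inside, point, roi):
    """Return (entered, now_inside): an entry is flagged only on the
    frame where the object transitions from outside to inside."""
    now_inside = inside_roi(point, roi)
    return (not prev_inside) and now_inside, now_inside
```

Tracking the `now_inside` flag per object frame-to-frame also yields exit events by the symmetric transition.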
- 10. The method (200) of any of the preceding claims, wherein, after detecting the object, the control unit (104) is configured to notify the driver or a passenger of the vehicle (20), wherein optionally the notification is a visual, audio and/or tactile notification, and/or the control unit (104) sends a signal to and/or even controls at least one driving assistance system and/or a function of the vehicle (20), such as braking or acceleration.
- 11. The method (200) of any of the preceding claims, wherein the control unit (104) has access to pre-stored image data, wherein optionally the control unit (104) has a data storage unit for storing the pre-stored image data, wherein optionally the pre-stored image data comprises information about objects, in particular object features, having varying visual properties such as shape, size and color, wherein the properties are used for identifying patterns of elements and/or objects, optionally patterns of pedestrians, plants, vulnerable road users, persons, buildings, obstacles, animals, vehicles, cyclists, walls, trees, etc., wherein optionally the pre-stored image data is used by the control unit (104) and/or an artificial intelligence method for identifying patterns of pre-stored objects in the captured image data, wherein optionally the objects are tracked by using projection points and/or object features, optionally by using cosine similarities of feature vectors, and wherein an extrapolation of the tracking history is used for predicting a possible collision with the vehicle (20), in particular with the front of the vehicle (20).
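Claim 11's tracking by cosine similarity of feature vectors, followed by extrapolation of the tracking history, can be sketched as below. The threshold, function names and the simple two-point linear extrapolation are our illustrative choices, not details disclosed by the patent.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_track(track_feature, detection_features, threshold=0.8):
    """Re-identify a tracked object: return the index of the most
    similar detection feature, or None if none exceeds the threshold."""
    best_i, best_s = None, threshold
    for i, f in enumerate(detection_features):
        s = cosine_similarity(track_feature, f)
        if s > best_s:
            best_i, best_s = i, s
    return best_i

def extrapolate(history, steps=1):
    """Linearly extrapolate the last two (x, y) positions of a track
    by `steps` frames to predict where the object will be next."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (x1 + steps * (x1 - x0), y1 + steps * (y1 - y0))
```

A collision prediction would then test whether the extrapolated position falls inside a region in front of the vehicle.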
- 12. The method (200) of any of the preceding claims, wherein the method (200) is a computer-implemented method with the control unit (104) arranged at a base station, wherein optionally the base station is a centralized location configured to receive data from a plurality of vehicles (20), in particular wirelessly, and/or the base station is at a cloud location.
- 13. A system (100) for object detection in a blind spot (10) around a vehicle (20), the system (100) comprising: a camera (102) configured for capturing image data of the surroundings of the vehicle (20), and a control unit (104) arranged on the vehicle (20) and/or comprised by the vehicle (20) and adapted to implement the method (200) according to any of the preceding claims.
- 14. The system (100) according to claim 13, wherein the control unit (104) is configured for: obtaining captured image data from the camera (102); processing the obtained image data at least partly to locate at least one object in the obtained image data; converting the processed image data at least partly into corresponding circumferential coordinates, wherein the corresponding circumferential coordinates relate to a viewpoint in relation to the position of the camera (102); converting the converted circumferential coordinates at least partially into corresponding orthogonal coordinates, wherein the corresponding orthogonal coordinates relate to orthogonal viewpoints with respect to the blind spot (10); and/or detecting entry of at least a part of the object into the blind spot (10) based at least on at least a part of the converted orthogonal coordinates.
Description
Method and system for detecting objects in blind spots

Technical Field

The present disclosure relates to a method for detecting objects in blind spots in front of a vehicle by using a camera and a control unit. Furthermore, the present disclosure relates to a system for detecting objects in a blind spot in front of a vehicle, the system comprising a camera and a control unit adapted to implement such a method.

Background

As the worldwide population increases, the number of vehicles traveling on the road increases, and road congestion due to traffic has increased with it. As traffic worldwide increases, the number of road accidents also increases, and various techniques are emerging to reduce their number. There has been great development in materials for bumpers and dashboards to reduce the impact of collisions, and chassis materials have likewise been developed to absorb collision impact. Similarly, there have been developments in the airbag field, and various developments have been made in lighting technology to reduce the number of accidents. Furthermore, collisions can be avoided by near-field communication notifying a driver of the presence of another vehicle, and sensors are included in vehicles to sense other vehicles nearby or at a distance. In addition, collision avoidance by inter-vehicle GPS data management has been developed and studied. Driver monitoring systems, as well as driving behavior prediction and estimation systems, have also been developed to avoid road collisions. For example, US11054835B2 discloses a collision avoidance method using LiDAR data comprising one or more points and determining a speed constraint for each of the one or more points.
Similarly, CN105405320A discloses an automotive collision warning system using 3D reconstruction of polarized-light images based on feature point extraction with the Harris operator. KR102605696B1 relates to a method of accurately estimating a map-based CCTV camera pose and target coordinates, wherein the method includes estimating a camera pose by accurately matching a map with a road object in an image captured by a CCTV camera, selecting a mapped object in the image, and estimating coordinates of the selected mapped object based on the estimated camera pose, wherein the camera pose includes information about the position, pan and tilt of the CCTV camera. However, there is a lack of development in the field of detecting objects in front of a vehicle using a camera. The object may be, but is not limited to, a person, a child, a bicycle and/or an animal. More particularly, there is a lack of development in the field of detecting objects in blind spots in front of a vehicle. The term blind spot may refer to a rectangular area that is not directly visible to the driver in the seated position. Furthermore, depending on the geometry of the respective vehicle, the blind spots in front of a vehicle of a particular geometry may be fixed, and as the geometry changes, the corresponding blind spot may also change.

Disclosure of Invention

Accordingly, there is a need to develop a cost-effective, safe and reliable system and method for detecting objects in blind spots in front of a vehicle. It is therefore an object of the present disclosure to provide a method for detecting objects in blind spots in front of a vehicle that at least partially overcomes the known drawbacks of the prior art. Furthermore, it is an object to develop a cost-effective and reliable system for detecting objects in blind spots in front of a vehicle.
It is a further object of the present disclosure to provide a cost-effective and safe method for detecting objects in blind spots in front of a vehicle. This object is solved by the features of claim 1. Embodiments of the method according to the present disclosure are described in claims 2 to 12. Furthermore, the present disclosure provides a system comprising a camera configured for capturing image data in front of a vehicle, and a control unit arranged on the vehicle and adapted to implement the method as described above. Embodiments of the system according to the disclosure are described in claims 13 and 14. Furthermore, the present disclosure relates to a computer-implemented method for object detection in a blind spot in front of a vehicle, which method is implemented by a control unit located in a base station, wherein the control unit is adapted to implement the method as described above. Accordingly, one aspect of the present disclosure relates to a method of detecting an object in a blind spot in front of a vehicle, which may include capturing image data by a camera, obtaining the captured image data from the camera by a control unit configured on the vehicle, and processing the obtained image data by the control unit to locate the object in the obtained image data.