DE-102024132718-A1 - METHOD AND SYSTEM FOR DETECTING AN OBJECT IN A BLIND SPOT
Abstract
The present disclosure relates to a method (200) for object detection in at least one blind spot (10) in the vicinity of a vehicle (20), wherein the method comprises acquiring (202) image data of an area in the vicinity of the vehicle (20) that at least partially covers the blind spot (10); processing (206) at least part of the acquired image data in order to locate at least one object in the acquired image data; converting (208) at least part of the processed image data into corresponding perimeter coordinates, wherein the corresponding perimeter coordinates are defined relative to a viewpoint from which the image data were acquired; transforming (210) at least part of the converted perimeter coordinates into corresponding orthogonal coordinates, wherein the corresponding orthogonal coordinates refer to an orthogonal viewpoint with respect to the blind spot (10); and detecting (212) a movement of at least a part of the object relative to the blind spot (10) based on at least a part of the transformed orthogonal coordinates. The present disclosure also relates to a system (100) for object detection in a blind spot (10) in the vicinity of a vehicle (20).
Inventors
- Alexander Slama
- Rostislav Korolkov
Assignees
- Motherson Innovations Company Limited
Dates
- Publication Date
- 20260513
- Application Date
- 20241108
Claims (14)
- A method (200) for object detection in at least one blind spot (10) in the vicinity of a vehicle (20), comprising: acquiring (202) image data of at least one area in the vicinity of the vehicle (20) that at least partially covers the blind spot (10); processing (206) at least part of the acquired image data to locate at least one object in the acquired image data; converting (208) at least part of the processed image data into corresponding perimeter coordinates, wherein the corresponding perimeter coordinates are defined relative to a viewpoint from which the image data were acquired; transforming (210) at least part of the converted perimeter coordinates into corresponding orthogonal coordinates, wherein the corresponding orthogonal coordinates refer to an orthogonal viewpoint with respect to the blind spot (10); and detecting (212) a movement of at least a part of the object relative to the blind spot (10) based at least partially on at least a part of the transformed orthogonal coordinates.
- The method (200) according to Claim 1, wherein (i) the blind spot is located at the front, side and/or rear of the vehicle, optionally relative to a principal direction of travel and/or a position and/or direction of the driver's cab and/or driver's seat; (ii) the image data is acquired by at least one camera (102); (iii) the processing, conversion, transformation and/or detection is performed at least partially by at least one control unit (104), optionally located at least partially in the vehicle (20) and/or at at least one base station, wherein the image data is optionally obtained from the camera (102) by the control unit (104); (iv) the perimeter coordinates are defined relative to a viewpoint in relation to the position of the camera (102) and/or the origin of the coordinates is defined by the viewpoint, in particular of the camera (102); (v) the object is at least one of the following: a nearby vehicle, a person, an animal, a cyclist, a pedestrian, a wall, a building, an obstacle, a plant, a tree and/or a vulnerable road user; and/or (vi) the movement of at least part of the object involves at least partially entering and/or exiting the blind spot (10).
- The method (200) according to Claim 1 or 2, wherein the processing step (206) also includes the classification of the object in the acquired image data, the classification optionally including a categorization of objects, optionally a human, an animal, a cyclist, a wall, an obstacle, a plant, a tree, a vulnerable road user, a building, a vehicle, a pedestrian and/or the like, and/or including a categorization of a hazard level, optionally a high hazard level or a low hazard level, and/or including a categorization of a probability of a potential collision with the object, optionally a high or low probability.
- The method (200) according to Claim 2 or 3, wherein the control unit (104) is configured to use an artificial intelligence approach, in particular a deep learning approach and/or a neural network model, optionally to perform the classification and/or localization of the object in the acquired image data, the neural network model optionally comprising multiple convolutional layers, optionally with a YOLO backbone, a ResNet head, segmentation and semantic segmentation, leading to classification and localization.
- The method (200) according to any one of Claims 2 to 4, wherein the control unit (104), particularly when using the artificial intelligence approach, is configured to find at least one projection point of one or more elements, optionally of the object, in the captured image data, optionally of at least one person, at least one vehicle, at least one cyclist, at least one animal, at least one wall, at least one tree, at least one obstacle, at least one vulnerable road user, at least one plant, at least one building and/or the like, onto the ground, optionally using at least one calibration to assign pixels or groups of pixels of the captured image data to one or more locations on the ground, optionally by projecting points from a top or bird's-eye view.
- The method (200) according to Claim 5, wherein, in addition to the projection points, at least one further point is used that characterizes the object and/or object features such as size, shape, color or the like, in particular by the control unit (104) and/or the artificial intelligence approach, to classify the element and/or the object, in particular after identification of the element as the object, optionally as a human, an animal, a cyclist, a wall, a vehicle, a plant, a building, an obstacle, a vulnerable road user or a tree or the like, and optionally to calculate the real and complete shape and position of the object relative to the vehicle (20).
- The method (200) according to one of the preceding claims, wherein the camera (102) is placed on top of a front windshield or a rear windshield of the vehicle (20) and/or on the A-pillar and/or on a rearview device of the vehicle (20) and/or is optionally adapted to detect the blind spot (10).
- The method (200) according to one of the preceding claims, wherein the step of converting (208) the processed image data into the corresponding perimeter coordinates corresponds to a conversion of pixel coordinates into real-world coordinates, and/or the step of transforming (210) corresponds to a perspective transformation which optionally provides 2D or 3D real-world coordinates of the recorded area, wherein optionally an origin of the coordinates relates to the position of the camera, to a specific point of the vehicle (20) or to a specific point in the environment of the vehicle (20), optionally a specific point on the ground.
- The method (200) according to one of the preceding claims, wherein the control unit (104) has prestored coordinates of a predetermined area of interest, in particular in the blind spot (10), wherein optionally the predetermined area of interest is part or all of the blind spot (10), wherein optionally the detection step (212) is based on comparing at least one of the transformed orthogonal coordinates with coordinates of the predetermined area of interest in the blind spot (10) in order to detect the entry of the object into the blind spot (10).
- The method (200) according to one of the preceding claims, wherein, after the object is detected, the control unit (104) is configured to notify a driver or passenger of the vehicle (20), wherein the notification is optionally a visual, acoustic and/or haptic notification, and/or the control unit (104) sends a signal to, and/or directly controls, at least one driver assistance system and/or a function of the vehicle (20), such as braking or accelerating.
- The method (200) according to one of the preceding claims, wherein the control unit (104) has access to pre-stored image data, wherein optionally the control unit (104) has a data storage unit for storing the pre-stored image data, wherein optionally the pre-stored image data includes information about the objects, in particular object features with varied visual attributes such as shape, size and color, wherein the attributes are used to recognize patterns of the element and/or object, optionally pedestrians, plants, vulnerable road users, people, buildings, obstacles, animals, vehicles, cyclists, walls, trees and/or the like, wherein optionally the pre-stored image data is used by the control unit (104) and/or the artificial intelligence approach to recognize patterns of pre-stored objects in the acquired image data, wherein optionally the object is tracked by using the projection points and/or the object features, optionally by using the cosine similarity of feature vectors, wherein an extrapolation of the trajectory is derived from the tracking history to predict a possible collision with, in particular the front of, the vehicle (20).
- The method (200) according to one of the preceding claims, wherein the method (200) is a computer-implemented method in which the control unit (104) is arranged at the base station, wherein optionally the base station is a central location configured to receive data from a plurality of the vehicles (20), in particular wirelessly, and/or the base station is located at a cloud location.
- A system (100) for object detection in a blind spot (10) in the vicinity of a vehicle (20), wherein the system (100) comprises: a camera (102) configured to capture image data in the vicinity of the vehicle (20); and a control unit (104) arranged on and/or encompassed by the vehicle (20) and adapted to implement a method (200) according to any one of the preceding claims.
- The system (100) according to Claim 13, wherein the control unit (104) is configured to: - receive the captured image data from the camera (102), - at least partially process the received image data in order to locate at least one object in the received image data, - at least partially convert the processed image data into corresponding perimeter coordinates, wherein the corresponding perimeter coordinates refer to a viewpoint with respect to a position of the camera (102), - at least partially transform the converted perimeter coordinates into corresponding orthogonal coordinates, wherein the corresponding orthogonal coordinates refer to an orthogonal viewpoint with respect to the blind spot (10), and/or - detect the entry of at least a part of the object into the blind spot (10) based at least on at least a part of the transformed orthogonal coordinates.
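The conversion of pixel coordinates into ground-plane (perimeter) coordinates via a perspective transformation, followed by a comparison against pre-stored coordinates of an area of interest in the blind spot (claims 8 and 9), can be illustrated with a minimal sketch. All names and numeric values below are hypothetical; in a real system the homography `H` would be derived from the camera calibration mentioned in claim 5, not hard-coded.

```python
# Sketch: map a pixel detection to bird's-eye ground coordinates via a planar
# homography, then test whether the mapped point lies inside a pre-stored
# rectangular area of interest representing (part of) the blind spot.

def pixel_to_ground(h, u, v):
    """Apply the 3x3 homography h (row-major nested lists) to pixel (u, v).
    Returns orthogonal (top-view) ground coordinates after the perspective
    divide by the homogeneous coordinate w."""
    x = h[0][0] * u + h[0][1] * v + h[0][2]
    y = h[1][0] * u + h[1][1] * v + h[1][2]
    w = h[2][0] * u + h[2][1] * v + h[2][2]
    return x / w, y / w

def in_area_of_interest(point, area):
    """area = (x_min, x_max, y_min, y_max) in ground coordinates (metres)."""
    x, y = point
    x_min, x_max, y_min, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical calibration result for illustration only.
H = [[0.01, 0.0, -3.2],
     [0.0, 0.02, 0.5],
     [0.0, 0.0, 1.0]]

# Assumed pre-stored area of interest in front of the vehicle.
BLIND_SPOT = (-2.0, 2.0, 0.5, 4.0)

ground_pt = pixel_to_ground(H, 320, 100)  # projection point of a detection
alert = in_area_of_interest(ground_pt, BLIND_SPOT)
```

With the assumed values, the pixel (320, 100) maps into the area of interest, so `alert` would trigger the notification step of claim 10.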
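The appearance-based tracking of claim 11, which associates detections across frames via the cosine similarity of feature vectors, can likewise be sketched. The short feature vectors and the threshold value here are hypothetical stand-ins for the embeddings a neural network model would produce.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_detection(new_feature, tracks, threshold=0.8):
    """Return the id of the stored track most similar to new_feature,
    or None if no track exceeds the similarity threshold."""
    best_id, best_sim = None, threshold
    for track_id, feature in tracks.items():
        sim = cosine_similarity(new_feature, feature)
        if sim > best_sim:
            best_id, best_sim = track_id, sim
    return best_id

# Hypothetical feature vectors of two objects tracked in earlier frames.
tracks = {1: [0.9, 0.1, 0.0], 2: [0.0, 0.8, 0.6]}

# A new detection whose appearance closely matches track 1.
assignment = match_detection([0.85, 0.15, 0.05], tracks)
```

Once detections are associated into a track in this way, the trajectory can be extrapolated from the tracking history, as claim 11 describes, to predict a possible collision with the vehicle.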
Description
The present disclosure relates to a method for object detection in the blind spot in front of a vehicle using a camera and a control unit. Furthermore, the present disclosure relates to a system for object detection in the blind spot in front of a vehicle, comprising a camera and a control unit adapted to implement such a method.

With global population growth, the number of vehicles on the road has increased, and traffic congestion has increased with it. As traffic has grown worldwide, so has the number of traffic accidents. Various technologies are being developed to reduce the number of accidents. There has been enormous progress in the materials used for bumpers and dashboards to reduce the impact of collisions, and developments in chassis materials to absorb collision forces. Similarly, there have been developments in the field of airbags, and various advancements in lighting technology. Furthermore, there are developments aimed at avoiding collisions through near-field communication, which informs the driver of the presence of another vehicle, and sensors have been integrated into vehicles to detect other vehicles nearby. Additionally, collision avoidance through inter-vehicle GPS data management has been developed and researched. Driver monitoring systems, as well as systems for predicting and estimating driving behavior, have also been developed to prevent traffic accidents.

For example, US11054835B2 discloses a collision avoidance method using lidar data comprising one or more points and determining the speed limit for each point or points. Similarly, CN105405320A discloses an early warning system for vehicle collisions using 3D reconstruction from polarized light images, with feature extraction based on the Harris operator.
KR102605696B1 relates to a method for precisely estimating a map-based CCTV camera position and object coordinates, wherein the method comprises estimating the camera position by precisely matching a map with a road-surface object in an image captured by a CCTV camera; selecting a mapped object in the image; and estimating coordinates of the selected mapped object based on the estimated camera position, wherein the camera position includes information about the position, pan, and tilt of the CCTV camera.

However, there is a lack of development in the area of object detection in front of a vehicle using a camera. The object can be, but is not limited to, a person, a child, a bicycle, and/or an animal. In particular, there is a lack of development in the area of object detection in the blind spot in front of the vehicle. The term 'blind spot' can refer to a rectangular area that cannot be directly seen by a driver in a seated position. Furthermore, for a vehicle of a given geometry, the blind spot in front of it can be fixed, depending on that geometry; accordingly, the blind spot can change when the geometry changes. Therefore, there is a need to develop a cost-effective, safe and reliable system and method for detecting an object in a blind spot in front of a vehicle.

Therefore, one objective of the present disclosure is to provide a method for detecting an object in a blind spot in front of a vehicle, in order to at least partially overcome the known disadvantages of the prior art. Furthermore, it is an objective to develop a cost-effective and reliable system for detecting an object in a blind spot in front of a vehicle. A further objective of the present disclosure is to provide a cost-effective and safe method for detecting an object in a blind spot in front of a vehicle. This objective is achieved by the features of claim 1.
Embodiments of the method according to the present disclosure are described in claims 2 to 12. Furthermore, the present disclosure provides a system comprising a camera configured to capture image data in front of the vehicle; and a control unit arranged within the vehicle and adapted to implement a method as described above. Embodiments of the system according to the present disclosure are described in claims 13 and 14. Furthermore, the present disclosure relates to a computer-implemented method for object detection in a blind spot in front of a vehicle, wherein the method is implemented by a control unit located at a base station, the control unit being adapted to implement a method as described above. Accordingly, one aspect of the present disclosure relates to a method for detecting an object in a blind spot in front of a vehicle, wherein the method may comprise: the acquisition of image data by a camera; the receipt of the acquired image data from the camera by a control unit; the processing of the received image data by the control unit to locate the object in the received image data; the conver