
CN-117121062-B - Method and system for estimating depth information


Abstract

The invention relates to a method for determining depth information about image information by means of an artificial neural network (2) in a vehicle (1), comprising the steps of: providing at least one emitter (3, 3') and at least one first and one second receiving sensor (4, 5), the first and second receiving sensors (4, 5) being arranged spaced apart from each other (S10); emitting electromagnetic radiation by means of the emitter (3, 3') (S11); receiving, by means of the first and second receiving sensors (4, 5), a reflected component of the electromagnetic radiation emitted by the emitter (3, 3'), and generating first image information (B1) by means of the first receiving sensor (4) and second image information (B2) by means of the second receiving sensor (5) (S12); comparing the first and second image information (B1, B2) to determine at least one unevenly illuminated image area (D1, D2) of the first and second image information, which is generated by parallax due to the spaced arrangement of the receiving sensors (4, 5); and analyzing the geometrical information of the at least one unevenly illuminated image area (D1, D2) and estimating depth information by means of the artificial neural network (2) based on the result of that analysis (S14).

Inventors

  • S. Heinrich
  • D. Krecker
  • T. Fisher
  • H.G. Kurz

Assignees

  • Continental Autonomous Mobility Germany GmbH
  • Volkswagen AG

Dates

Publication Date
2026-05-12
Application Date
2022-03-24
Priority Date
2021-03-29

Claims (15)

  1. A method of determining depth information about image information in a vehicle (1) by means of an artificial neural network (2), the method comprising the steps of: providing at least one emitter (3, 3', 6') and at least one first and one second receiving sensor (4, 5, 7'), wherein the first and second receiving sensors (4, 5, 7') are arranged spaced apart from each other; emitting electromagnetic radiation by means of the emitter (3, 3', 6'); receiving, by the first and second receiving sensors (4, 5, 7'), a reflected component of the electromagnetic radiation emitted by the emitter (3, 3', 6'), and generating, based on the received reflected component, first image information (B1) by the first receiving sensor (4, 7) and second image information (B2) by the second receiving sensor (5, 7'); comparing the first and second image information (B1, B2) to determine at least one unevenly illuminated image area (D1, D2) of the first and second image information having a brightness difference, the at least one unevenly illuminated image area being generated by parallax based on the spaced arrangement of the receiving sensors (4, 5, 7'); and analyzing the geometrical information of the at least one unevenly illuminated image area (D1, D2) having a brightness difference and estimating depth information by means of the artificial neural network (2) based on the result of that analysis.
  2. The method according to claim 1, characterized in that the unevenly illuminated image areas (D1, D2) with brightness differences are generated in a transition area between a first object (O1) and a second object (O2) which are at different distances from the first and second receiving sensors (4, 5, 7'), and in that the estimated depth information is depth difference information containing information about the difference in distance between the first and second objects (O1, O2) and the vehicle (1).
  3. The method according to claim 1 or 2, characterized in that the emitter (3, 3') is at least one headlight emitting visible light in the wavelength range between 380 nm and 800 nm, and the first and second receiving sensors (4, 5) are each cameras.
  4. The method according to claim 3, characterized in that the first and second receiving sensors (4, 5) form a stereo camera system.
  5. The method according to claim 1, characterized in that at least two emitters (3, 3') in the form of headlamps of the vehicle (1) are provided and the receiving sensors (4, 5) each correspond to one of the headlamps (3, 3'), such that the line of sight between the object to be detected (O1, O2) and a headlamp is parallel to the line of sight between the object to be detected (O1, O2) and the receiving sensor (4, 5) corresponding to that headlamp.
  6. The method according to claim 1, characterized in that the first and second receiving sensors (4, 5) are integrated into a headlight of the vehicle (1).
  7. The method according to claim 1, characterized in that the artificial neural network (2) estimates the depth from the width (b) of the unevenly illuminated image area (D1, D2) with brightness differences, measured in the horizontal direction.
  8. The method according to claim 1, characterized in that the artificial neural network (2) determines depth information in the image area detected by the first and second receiving sensors (4, 5, 7') based on triangulation between image points in the first and second image information (B1, B2) and the first and second receiving sensors (4, 5, 7').
  9. The method according to claim 8, characterized in that the artificial neural network (2) compares depth information determined by triangulation with estimated depth information obtained by analyzing the geometrical information of the at least one unevenly illuminated image area (D1, D2) with a brightness difference, and generates adapted depth information depending on the comparison.
  10. The method according to claim 8 or 9, characterized in that the artificial neural network (2) adapts depth information obtained by triangulation based on an analysis of the geometrical information of the at least one unevenly illuminated image area (D1, D2) with brightness differences.
  11. The method according to claim 1, characterized in that infrared radiation, radar signals or laser radiation is emitted by at least one emitter (6, 6').
  12. The method according to claim 11, characterized in that at least a part of the receiving sensors (7, 7') is an infrared camera, a radar receiver or a receiver for laser radiation.
  13. The method according to claim 1, characterized in that, for estimating depth information about image information representing a side area of the vehicle (1) and/or a rear area of the vehicle (1), more than one emitter (3, 3', 6') and more than two receiving sensors (4, 5, 7') are used for determining the image information, wherein a plurality of sensor groups (S1, S2, S3, S4) are provided, each having at least one emitter and at least two receiving sensors, and wherein the image information of the respective sensor groups (S1, S2, S3, S4) is combined into overall image information.
  14. The method according to claim 13, characterized in that the sensor groups (S1, S2, S3, S4) at least partly utilize electromagnetic radiation of different frequency bands.
  15. A system for determining depth information about image information in a vehicle (1), the system comprising a computing unit (8) performing the computing operations of an artificial neural network (2), at least one emitter (3, 3', 6') configured to emit electromagnetic radiation, and at least one first and one second receiving sensor (4, 5, 7') arranged spaced apart from each other, wherein the first and second receiving sensors (4, 5, 7') are configured to receive reflected components of the electromagnetic radiation emitted by the emitter (3, 3', 6'), and wherein the first receiving sensor (4, 7) is configured to generate first image information (B1) and the second receiving sensor (5, 7') is configured to generate second image information (B2) from the received reflected components, and wherein the artificial neural network (2) is configured to: compare the first and second image information (B1, B2) to determine at least one unevenly illuminated image area (D1, D2) of the first and second image information having a brightness difference, wherein the unevenly illuminated image area (D1, D2) having a brightness difference is generated by parallax due to the spaced arrangement of the receiving sensors (4, 5, 7'); and analyze the geometrical information of the at least one unevenly illuminated image area (D1, D2) with a brightness difference and estimate depth information based on the result of that analysis.
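
The pipeline claimed above (compare the two images, locate the brightness-difference regions caused by parallax, analyse their geometry, estimate depth) can be sketched in plain Python. This is a minimal illustration only: all names are invented for this sketch, and a fixed brightness threshold plus the region's horizontal width stand in for the learned comparison and estimation that the claims assign to the artificial neural network (2).

```python
# Illustrative sketch of the claimed pipeline. Images are row-major lists of
# brightness values; a simple heuristic replaces the patent's neural network.

def uneven_regions(b1, b2, threshold=30):
    """Compare first and second image information (B1, B2) and return the
    pixel coordinates whose brightness differs by more than `threshold`."""
    return [
        (r, c)
        for r, row in enumerate(b1)
        for c, v in enumerate(row)
        if abs(v - b2[r][c]) > threshold
    ]

def region_width(region):
    """Horizontal extent (b) of an unevenly illuminated image area, the
    geometric feature that claim 7 bases the depth estimate on."""
    cols = [c for _, c in region]
    return max(cols) - min(cols) + 1 if cols else 0

b1 = [[200, 200, 200, 200, 200],
      [200, 200, 200, 200, 200]]
b2 = [[200,  40,  40, 200, 200],   # pixels shadowed by parallax
      [200,  40,  40, 200, 200]]

region = uneven_regions(b1, b2)
print(region_width(region))  # horizontal width of the shadow region -> 2
```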

Description

Method and system for estimating depth information

Technical Field

The invention relates to a method and a system for determining, by means of an artificial neural network, depth information about image information provided by an imaging sensor of a vehicle.

Background

It is known in principle to detect the surroundings of a vehicle in three dimensions by means of imaging sensors. In particular, stereo cameras may be used for 3D environment detection. To calculate distance information, the image information provided by the two cameras is correlated and the distance of an image point from the vehicle is determined by means of triangulation. The cameras of a stereo camera system are, for example, integrated into the front region of the vehicle; typical installation locations are the windshield area or the radiator grille. To generate sufficient brightness for image analysis at night, the headlamps of the vehicle are generally used. A problem with current 3D environment detection is that unevenly illuminated image areas in the image information obtained by the cameras of the stereo camera system make the determination of depth information more difficult, because in these unevenly illuminated areas distance information cannot be obtained by the stereo camera system. This applies in particular when shadows are generated by parallax between a headlamp and a camera due to their different mounting positions.

Disclosure of Invention

In view of the above, it is an object of the present invention to provide a method of determining depth information about image information that enables an improved determination of depth information. According to a first aspect, the invention relates to a method for determining depth information about image information in a vehicle by means of an artificial neural network. The neural network is preferably a convolutional neural network (CNN).
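
For an ideal rectified stereo pair, the triangulation mentioned above reduces to the standard relation depth = focal length × baseline / disparity. The sketch below uses illustrative numbers, not values from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation for a rectified camera pair:
    depth = f * B / d. Returns None where the disparity vanishes
    (the point is at infinity or could not be matched)."""
    if disparity_px <= 0:
        return None
    return focal_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 30 cm baseline, 40 px disparity.
print(depth_from_disparity(800, 0.30, 40))  # -> 6.0 (metres)
```

This is exactly the quantity that becomes unavailable in the unevenly illuminated (shadowed) areas, since no disparity can be measured for pixels that are dark in one image and lit in the other.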
The method comprises the following steps: First, at least one emitter and at least one first and one second receiving sensor are provided. The emitter may be adapted to emit electromagnetic radiation in the spectrum visible to humans. Alternatively, the emitter may emit electromagnetic radiation in the infrared spectral range, radar signals in a frequency range of about 24 GHz or about 77 GHz (the emitter being a radar emitter), or laser radiation (the emitter being a lidar emitter). The first and second receiving sensors are arranged spaced apart from each other. The receiving sensors are adapted to the type of emitter, i.e. they are adapted to receive reflected components of the electromagnetic radiation emitted by the at least one emitter. In particular, the receiving sensors may be adapted to receive electromagnetic radiation in the visible or infrared spectral range, radar signals in a frequency range of about 24 GHz or about 77 GHz (radar receiver), or laser radiation (lidar receiver). Subsequently, electromagnetic radiation is emitted by the emitter, and reflected components of the emitted radiation are received by the first and second receiving sensors. Based on the received reflected components, the first receiving sensor generates first image information and the second receiving sensor generates second image information. The first and second image information are then compared to determine at least one unevenly illuminated image area in the first and second image information, the at least one unevenly illuminated image area being produced by parallax based on the spaced arrangement of the receiving sensors. If the first and second receiving sensors are not located at the center of projection of the emitters, in particular of the headlights, an unevenly illuminated image area may also be produced by parallax between a receiving sensor and its corresponding emitter.
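
The comparison step described above amounts to a per-pixel brightness difference between the two images. A minimal sketch, assuming grayscale images of equal size and an arbitrary fixed threshold (both assumptions of this illustration, not of the patent):

```python
def shadow_mask(b1, b2, threshold=30):
    """Per-pixel comparison of first and second image information:
    True marks an unevenly illuminated pixel, i.e. one that is
    noticeably brighter in one image than in the other."""
    return [
        [abs(v1 - v2) > threshold for v1, v2 in zip(row1, row2)]
        for row1, row2 in zip(b1, b2)
    ]

b1 = [[210, 205, 198], [212, 201, 199]]
b2 = [[208,  60, 200], [209,  55, 197]]  # middle column is shadowed in B2
print(shadow_mask(b1, b2))
# -> [[False, True, False], [False, True, False]]
```

The resulting mask is the input to the geometric analysis: in the patent the neural network examines the shape and extent of the connected True regions.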
In other words, at least one image area that is brighter or darker in the first image information than in the second image information is determined as an "unevenly illuminated image area". The geometric information of the at least one unevenly illuminated image area is then analyzed, and depth information is estimated by the artificial neural network based on the analysis result. In particular, the size or extent of the unevenly illuminated image area is analyzed, since the neural network can use it to draw conclusions about the three-dimensional shape of an object (for example, that a specific region of the object is less distant from the vehicle than another region) or about the distance between two objects in the surroundings of the vehicle. The technical advantage of the proposed method is that even in unevenly illuminated areas, where depth cannot be determined by means of triangulation, conclusions can be drawn by the neural network regarding the distance of one or more objects in and/or around the vehicle.
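
To make the geometric intuition concrete: under an idealized pinhole model, the horizontal width of a parallax shadow roughly equals the disparity difference between the occluding near edge and the surface behind it, b ≈ f · B · (1/z_near − 1/z_far). This closed-form relation is an assumption of this sketch, not the patent's method; the patent instead lets a trained neural network learn the mapping from shadow geometry to depth. With that caveat, the model can be inverted for the depth of the farther object:

```python
def far_depth_from_shadow_width(width_px, near_depth_m, focal_px, baseline_m):
    """Idealized (hypothetical) model: the shadow width b satisfies
        b = f * B * (1/z_near - 1/z_far),
    so given the near object's depth, solve for the far depth z_far."""
    inv_far = 1.0 / near_depth_m - width_px / (focal_px * baseline_m)
    if inv_far <= 0:
        return None  # shadow too wide: background effectively at infinity
    return 1.0 / inv_far

# Illustrative: f = 800 px, B = 0.30 m, near object at 6 m, 20 px wide shadow.
print(far_depth_from_shadow_width(20, 6.0, 800, 0.30))  # approx. 12.0 m
```

A wider shadow thus indicates a larger depth gap between the two objects, which is the qualitative conclusion the claims draw from the width (b) of the unevenly illuminated area.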