EP-3979196-B1 - IMAGE PROCESSING METHOD AND APPARATUS FOR TARGET DETECTION

EP3979196B1

Inventors

  • MIYAHARA, SHUNJI

Dates

Publication Date
2026-05-06
Application Date
2020-07-23

Claims (12)

  1. A computer-implemented image processing method for target detection, comprising: determining an analysis region, capable of covering the target, in an image captured by a camera on a vehicle, and preprocessing the image in the analysis region to obtain an edge point set for the target; performing, if there is an edge point connection or overlap between a peripheral edge line of the target and a peripheral edge line of an object other than the target in the same edge point set, the following edge point separation processing: selecting edge points from the edge point set to form a first class and a second class respectively, wherein the first class comprises edge points formed as a primary set, the second class comprises edge points other than the primary set in the edge point set, and a height of the edge points in the primary set in the image is less than a height of the target; performing linear regression processing on the edge points in the primary set to obtain a corresponding linear regression line; selecting an edge point from the second class one by one, and calculating a deviation of the selected edge point with respect to the linear regression line; adding, if the deviation is less than a preset standard deviation, the selected edge point to the primary set; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, so as to form a final primary set; and creating the target on the basis of the edge points in the final primary set for target detection.
  2. The image processing method for target detection according to claim 1, characterized in that selecting edge points from the edge point set to form the first class comprises: selecting edge points sequentially, starting from a bottom position of the edge point set and in order of increasing height, until a set height position is reached, to form the first class, wherein the selected edge points need to meet the condition that an initial height is less than the height of the target.
  3. The image processing method for target detection according to claim 1, characterized in that an initial height of the edge points in the first class in the image is two thirds of the height of the target.
  4. The image processing method for target detection according to claim 1, characterized in that selecting the edge point from the second class one by one comprises: selecting edge points one by one for deviation calculation, starting from the edge point at the lowest height position in the second class.
  5. The image processing method for target detection according to claim 1, characterized by further comprising: forming edge points that are not selected and edge points with deviations greater than or equal to the preset standard deviation in the second class as a secondary set; and discarding the secondary set.
  6. An image processing apparatus for target detection, comprising: an image preprocessing module, configured to determine an analysis region, capable of covering the target, in an image captured by a camera on a vehicle, and preprocess the image in the analysis region to obtain an edge point set for the target; an edge point separation module, configured to perform, if there is an edge point connection or overlap between a peripheral edge line of the target and a peripheral edge line of an object other than the target in the same edge point set, the following edge point separation processing: selecting edge points from the edge point set to form a first class and a second class respectively, wherein the first class comprises edge points formed as a primary set, the second class comprises edge points other than the primary set in the edge point set, and a height of the edge points in the primary set in the image is less than a height of the target; performing linear regression processing on the edge points in the primary set to obtain a corresponding linear regression line; selecting an edge point from the second class one by one, and calculating a deviation of the selected edge point with respect to the linear regression line; adding, if the deviation is less than a preset standard deviation, the selected edge point to the primary set; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, so as to form a final primary set; and a target creating module, configured to create the target on the basis of the edge points in the final primary set for target detection.
  7. The image processing apparatus for target detection according to claim 6, characterized in that selecting, by the edge point separation module, edge points from the edge point set to form the first class comprises: selecting edge points sequentially, starting from a bottom position of the edge point set and in order of increasing height, until a set height position is reached, to form the first class, wherein the selected edge points need to meet the condition that an initial height is less than the height of the target.
  8. The image processing apparatus for target detection according to claim 6, characterized in that an initial height of the edge points in the first class in the image is two thirds of the height of the target.
  9. The image processing apparatus for target detection according to claim 6, characterized in that selecting, by the edge point separation module, the edge point from the second class one by one comprises: selecting edge points one by one for deviation calculation, starting from the edge point at the lowest height position in the second class.
  10. The image processing apparatus for target detection according to claim 6, characterized in that the edge point separation module is further configured to: form edge points that are not selected and edge points with deviations greater than or equal to the preset standard deviation in the second class as a secondary set; and discard the secondary set.
  11. A machine-readable storage medium having instructions stored thereon, the instructions being configured to make a machine execute the image processing method for target detection according to any one of claims 1 to 5.
  12. A processor configured to execute the instructions stored in the machine-readable storage medium according to claim 11.
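The iterative separation procedure recited in claims 1 and 6 can be sketched in code. This is a minimal illustration, not the patented implementation: the function names, the two-thirds initial height taken from claims 3 and 8, and the `max_dev` threshold standing in for the "preset standard deviation" are assumptions, and height is taken as a y coordinate measured upward from the bottom of the analysis region.

```python
# Assumed sketch of the edge point separation of claims 1 and 6.
# Edge points are (x, height) pairs; height increases upward from the bottom.

def fit_line(points):
    """Least-squares regression of x on height y: x ~ a*y + b."""
    n = len(points)
    my = sum(p[1] for p in points) / n
    mx = sum(p[0] for p in points) / n
    var = sum((p[1] - my) ** 2 for p in points)
    cov = sum((p[1] - my) * (p[0] - mx) for p in points)
    a = cov / var
    return a, mx - a * my

def separate_edge_points(points, target_height, init_fraction=2/3, max_dev=2.0):
    pts = sorted(points, key=lambda p: p[1])      # bottom position first
    cut = init_fraction * target_height           # claims 3/8: 2/3 of target height
    primary = [p for p in pts if p[1] < cut]      # first class (primary set)
    second = [p for p in pts if p[1] >= cut]      # second class, lowest first
    for x, y in second:
        a, b = fit_line(primary)                  # regress over current primary set
        if abs(x - (a * y + b)) >= max_dev:       # deviation reaches the preset limit:
            break                                 # remaining points belong to object B
        primary.append((x, y))                    # point lies on the target's edge
    return primary                                # final primary set
```

In this sketch, points lying on the target's straight peripheral edge keep extending the regression line; the first point that veers off onto a connected background object stops the loop, and everything above it corresponds to the discarded secondary set of claims 5 and 10.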

Description

Technical Field

The present invention relates to the technical field of intelligent transportation and image processing, in particular to an image processing method and apparatus for target detection.

Background Art

At present, vehicles with an autonomous driving (AD) function or an advanced driver assistance system (ADAS) have begun to be gradually introduced to the market, which has greatly promoted the development of intelligent transportation. In the prior art, the sensors supporting AD/ADAS mainly include a radar, a vision camera system (hereinafter also referred to as a camera), a laser radar, an ultrasonic sensor, etc., among which the vision camera system is the most widely used because it can obtain the same two-dimensional image information as human vision; its typical applications include lane detection, object detection, vehicle detection, pedestrian detection, cyclist detection and other designated target detection. After the camera captures an image, processing such as edge extraction is performed on the image to extract object and environmental information from the captured image.

However, when the camera is used for target detection, owing to the performance limitations of current cameras, the camera is very sensitive to an edge connection between a target and an unwanted object in the background, and detection errors are often caused by the edge connection or overlap between the object and the target. As shown in Fig. 1, a target A is a small target (such as a traffic cone) that needs to be detected, and an object B is an object in the background that does not need to be detected; the edges of the target A and the object B are connected or overlapped (for example, in the part circled by the dotted line), and the two may be very close in chroma, so it is difficult to distinguish the target A from the object B in the image captured by the camera, resulting in inaccurate recognition of the target A.
In addition, in target detection, the target A or the object B is also likely to be a self-shadowing object, for example under the sun, and correspondingly, the edge of the object B or the target A may disappear in the shadow, further exacerbating the difficulty of separating the edge of the target A from the edge of the object B. In the prior art, there is a solution for distinguishing an object from a target by using the content of the object as well as external textures, colors, etc., but that solution is very complicated and obviously not suitable for objects and targets having similar features such as textures and colors. Therefore, there is currently no effective method to separate the edges of the target and the undesired object in the image.

YONG HUANG ET AL: "Real-time traffic cone detection for autonomous vehicle", 34TH CHINESE CONTROL CONFERENCE (CCC), Technical Committee on Control Theory, Chinese Association of Automation, 28 July 2015, pages 3718-3722, DOI: 10.1109/CHICC.2015.7260215, teaches an edge-based approach for traffic cone detection including line fitting.

Summary of the Invention

In view of this, the present invention aims to provide an image processing method for target detection, so as to at least partially solve the above technical problems. To achieve this objective, the technical solution of the present invention is implemented as set out in the appended claims.
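The claimed preprocessing, which yields the edge point set inside the analysis region, can be illustrated with a deliberately simple sketch. The claims do not prescribe a particular edge operator; the horizontal grey-level difference and the `threshold` value below are assumptions for illustration only.

```python
# Assumed illustration of "preprocess the image in the analysis region to
# obtain an edge point set"; the edge operator shown here is not from the claims.

def edge_points(image, region, threshold=50):
    """Return (x, y) edge points inside region = (x0, y0, x1, y1).

    image is a 2-D list of grey levels; a pixel counts as an edge point
    when its horizontal grey-level difference exceeds `threshold`.
    """
    x0, y0, x1, y1 = region
    points = []
    for y in range(y0, y1):
        for x in range(x0 + 1, x1):
            if abs(image[y][x] - image[y][x - 1]) > threshold:
                points.append((x, y))
    return points
```

A real system would typically use a stronger operator (e.g. a Sobel or Canny stage) before the separation step, but any method producing (x, y) edge points fits the claimed processing chain.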
Roughly speaking: an image processing method for target detection includes: determining an analysis region, capable of covering a target, in an image captured by a camera on a vehicle, and preprocessing the image in the analysis region to obtain an edge point set for the target; performing, if there is an edge point connection or overlap between a peripheral edge line of the target and a peripheral edge line of an object other than the target in the same edge point set, the following edge point separation processing: selecting edge points from the edge point set to form a first class and a second class respectively, wherein the first class includes edge points formed as a primary set, the second class includes edge points other than the primary set in the edge point set, and a height of the edge points in the primary set in the image is less than a height of the target; performing linear regression processing on the edge points in the primary set to obtain a corresponding linear regression line; selecting an edge point from the second class one by one, and calculating a deviation of the selected edge point with respect to the linear regression line; adding, if the deviation is less than a preset standard deviation, the selected edge point to the primary set; and repeating the above processing until the deviation is greater than or equal to the preset standard deviation, so as to form a final primary set; and creating the target on the basis of the edge points in the final primary set for target detection. Further, selecting the edge points from the edge point set to form the first class includes: selecting, starting from a bo