CN-122023513-A - Object position detection method, apparatus, computer device, readable storage medium, and program product
Abstract
The present application relates to an object position detection method, apparatus, computer device, readable storage medium and program product. The method comprises: performing edge point detection on an object to be detected in a target image to obtain an edge point set; for any edge point in the edge point set, querying a matching geometric position relation in a pre-constructed template edge point feature table according to the edge orientation angle corresponding to the edge point; determining a candidate center position according to the geometric position relation and the sub-pixel coordinate position of the edge point; performing weighted voting on the candidate center position according to the edge confidence corresponding to the edge point to generate a voting result for that edge point; fusing the voting results of all edge points in the edge point set to obtain a center position voting heat map; and outputting the center point position information of the object to be detected according to the sub-pixel coordinate position, in the target image, of the peak point of the heat map. The method improves the precision of object position detection.
Inventors
- WU HUAN
Assignees
- Qingdao Jukanyun Technology Co., Ltd. (青岛聚看云科技有限公司)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2025-12-29
Claims (10)
- 1. An object position detection method, the method comprising: acquiring a target image, and performing edge point detection on an object to be detected in the target image to obtain an edge point set, together with the coordinate position in the target image, the edge orientation angle and the edge confidence of each edge point in the edge point set; for any edge point in the edge point set, querying a geometric position relation matching the edge point in a pre-constructed template edge point feature table according to the edge orientation angle corresponding to the edge point, and determining a candidate center position of the object to be detected in the target image according to the matched geometric position relation and the coordinate position corresponding to the edge point; performing weighted voting on the candidate center position corresponding to the edge point according to the edge confidence corresponding to the edge point to generate a voting result corresponding to the edge point, wherein the voting result represents probability information that the candidate center position is the true center position of the object to be detected; and fusing the voting results corresponding to the edge points in the edge point set to obtain a center position voting heat map, and outputting center point position information of the object to be detected according to the coordinate position of the peak point in the center position voting heat map.
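Claim 1 describes a vote-from-edge-points scheme similar in spirit to a Generalized Hough Transform: each edge point looks up center offsets by its orientation angle and casts a confidence-weighted vote into an accumulator. A minimal sketch under that reading (the function names, the table layout, and the integer-cell accumulator are illustrative assumptions, not the patented implementation, which votes at sub-pixel precision):

```python
import math
import numpy as np

def vote_for_center(edge_points, template_table, shape, angle_step=1.0):
    """Accumulate confidence-weighted votes for the object center.

    edge_points: iterable of (x, y, orientation_deg, confidence)
    template_table: dict mapping quantized orientation -> list of (dx, dy)
                    offsets from a template edge point to the template center
    shape: (height, width) of the voting accumulator
    """
    acc = np.zeros(shape, dtype=float)
    n_bins = int(round(360.0 / angle_step))
    for x, y, theta, conf in edge_points:
        key = int(round(theta / angle_step)) % n_bins
        for dx, dy in template_table.get(key, []):
            cx, cy = int(round(x + dx)), int(round(y + dy))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += conf  # confidence-weighted vote
    return acc

def peak_center(acc):
    """Return the (x, y) position of the strongest vote."""
    iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
    return int(ix), int(iy)
```

For a circular object, every edge point's outward gradient angle indexes an offset pointing back to the center, so all votes coincide at the true center cell.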
- 2. The method according to claim 1, wherein in the template edge point feature table each geometric position relation further has a corresponding feature weight coefficient representing the importance of that geometric position relation, and wherein performing weighted voting on the candidate center position corresponding to the edge point according to the edge confidence corresponding to the edge point to generate the voting result corresponding to the edge point comprises: acquiring, from the template edge point feature table, the feature weight coefficient corresponding to the matched geometric position relation; determining a voting weight corresponding to the edge point according to the feature weight coefficient and the edge confidence corresponding to the edge point; and casting a vote, in a sub-pixel manner, for the candidate center position corresponding to the edge point in a voting accumulator according to the voting weight corresponding to the edge point, to obtain the voting result corresponding to the edge point.
- 3. The method according to claim 2, wherein determining the voting weight corresponding to the edge point according to the feature weight coefficient and the edge confidence corresponding to the edge point comprises: acquiring matching deviation information, wherein the matching deviation information comprises the deviation between the geometric position relation from the edge point to the candidate center position and the matched geometric position relation, and/or the deviation between the edge orientation angle corresponding to the edge point and the edge orientation angle corresponding to a target edge point, the target edge point being the template edge point matched with the edge point; mapping the matching deviation information to a weight attenuation coefficient through a preset attenuation function; and taking the product of the feature weight coefficient, the edge confidence corresponding to the edge point, and the weight attenuation coefficient to obtain the voting weight corresponding to the edge point.
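Claims 2 and 3 combine three factors into the voting weight: the template's feature weight coefficient, the edge confidence, and an attenuation coefficient obtained by mapping the matching deviation through a preset attenuation function. The claims do not specify the attenuation function; a Gaussian decay over the orientation-angle deviation is one plausible choice, sketched below (`sigma_deg` is a hypothetical tuning parameter):

```python
import math

def voting_weight(feature_weight, edge_confidence,
                  angle_deviation_deg, sigma_deg=5.0):
    """Voting weight per claims 2-3: product of the feature weight
    coefficient, the edge confidence, and a deviation-based attenuation
    coefficient (here a Gaussian decay over the angle deviation)."""
    attenuation = math.exp(-(angle_deviation_deg ** 2)
                           / (2.0 * sigma_deg ** 2))
    return feature_weight * edge_confidence * attenuation
```

A perfectly matched edge point (zero deviation) keeps its full weight; larger deviations shrink the vote smoothly toward zero rather than rejecting the point outright.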
- 4. The method of claim 1, wherein after fusing the voting results corresponding to the edge points in the edge point set to obtain the center position voting heat map, the method further comprises: performing a preset heat map optimization operation on the center position voting heat map to obtain an optimized center position voting heat map, wherein the heat map optimization operation comprises at least one of a local Gaussian smoothing operation or a normalization operation; and identifying the peak point from the optimized center position voting heat map.
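Claim 4's heat-map optimization can be sketched as local Gaussian smoothing followed by normalization. The NumPy version below uses a separable kernel; the kernel size and sigma are illustrative choices, not values from the patent:

```python
import numpy as np

def optimize_heatmap(acc, ksize=5, sigma=1.0):
    """Apply local Gaussian smoothing, then normalize the map to [0, 1]."""
    half = ksize // 2
    xs = np.arange(-half, half + 1)
    kernel = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    # separable convolution: smooth rows, then columns
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, acc)
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, smoothed)
    peak = smoothed.max()
    return smoothed / peak if peak > 0 else smoothed
```

Smoothing pools votes that land in adjacent cells due to noise or slight deformation, so the peak becomes more stable without moving for a symmetric vote cluster.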
- 5. The method of claim 1, wherein the geometric position relation comprises a radial distance, the method further comprising: acquiring a template image, the template image being an image comprising the template object; for the template object in the template image, taking the template center point as the pole, determining a plurality of sampling angles on the contour edge of the template object according to a preset sampling angle step, and calculating, at each sampling angle, the set of radial distances from the template edge points to the template center point; and clustering the radial distance sets to obtain the edge orientation angle of each template edge point and the radial distance from the template edge point to the template center point, so as to construct the template edge point feature table.
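Claim 5 builds the template edge point feature table offline: sample the template contour around the center at fixed angular steps and record, per edge orientation, the radial geometry back to the center. A simplified sketch keyed on the quantized orientation angle (the clustering step of the claim is omitted, and all names are hypothetical):

```python
import math

def build_template_table(contour_points, center, angle_step_deg=1.0):
    """Build a template edge point feature table.

    contour_points: iterable of (x, y, orientation_deg) template edge points.
    Returns a dict mapping quantized orientation -> list of
    (dx, dy, radial_distance) from the edge point to the template center.
    """
    cx, cy = center
    n_bins = int(round(360.0 / angle_step_deg))
    table = {}
    for x, y, orientation_deg in contour_points:
        key = int(round(orientation_deg / angle_step_deg)) % n_bins
        dx, dy = cx - x, cy - y
        table.setdefault(key, []).append((dx, dy, math.hypot(dx, dy)))
    return table
```

At detection time, an edge point's orientation angle indexes this table, and the stored offsets are added to the point's coordinates to recover candidate center positions.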
- 6. The method of claim 1, wherein acquiring the target image comprises: acquiring an original image obtained by photographing the object to be detected; and performing a preset image optimization operation on the original image to obtain the target image, wherein the image optimization operation comprises at least one of an image denoising operation, a contrast enhancement operation, or a gamma correction operation.
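Claim 6's image optimization can be illustrated with a simple min-max contrast stretch followed by gamma correction on a grayscale image (denoising is omitted; the gamma value is a hypothetical parameter, and the patent does not prescribe these particular operations' implementations):

```python
import numpy as np

def preprocess(img, gamma=0.8):
    """Contrast-stretch a grayscale image to [0, 1], apply gamma
    correction, and rescale back to the 8-bit range."""
    f = img.astype(float)
    lo, hi = f.min(), f.max()
    if hi > lo:
        f = (f - lo) / (hi - lo)        # contrast enhancement (min-max)
    else:
        f = np.zeros_like(f)            # flat image: nothing to stretch
    corrected = np.power(f, gamma)      # gamma correction
    return (corrected * 255.0).astype(np.uint8)
```

A gamma below 1 brightens mid-tones, which can make weak edges easier for the subsequent edge detector to pick up.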
- 7. An object position detection apparatus, characterized in that the apparatus comprises: an acquisition module, configured to acquire a target image and perform edge point detection on an object to be detected in the target image to obtain an edge point set, together with the coordinate position in the target image, the edge orientation angle and the edge confidence of each edge point in the edge point set; a back-derivation module, configured to, for any edge point in the edge point set, query a geometric position relation matching the edge point in a pre-constructed template edge point feature table according to the edge orientation angle corresponding to the edge point, and determine a candidate center position of the object to be detected in the target image according to the matched geometric position relation and the coordinate position corresponding to the edge point, wherein the template edge point feature table records the mapping relation between the edge orientation angle of a template edge point and the geometric position relation from the template edge point to the template center point, the template edge points are a plurality of edge points of the template object of the object to be detected, and the template center point is the center point of the template object; a voting module, configured to perform weighted voting on the candidate center position corresponding to the edge point according to the edge confidence corresponding to the edge point to generate a voting result corresponding to the edge point, wherein the voting result represents probability information that the candidate center position is the true center position of the object to be detected; and a positioning module, configured to fuse the voting results corresponding to the edge points in the edge point set to obtain a center position voting heat map, and output center point position information of the object to be detected according to the coordinate position of the peak point in the center position voting heat map.
- 8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
- 9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
- 10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
Description
Object position detection method, apparatus, computer device, readable storage medium, and program product
Technical Field
The present application relates to the field of computer vision and industrial inspection technology, and in particular to an object position detection method, an object position detection apparatus, a computer device, a computer-readable storage medium, and a computer program product.
Background
In fields such as industrial automation and intelligent manufacturing, accurate detection of the position of an object to be detected (for example, an electronic component such as a capacitor or a resistor) is a core prerequisite for subsequent automated actions such as assembly and sorting. Existing object position detection methods, however, often realize positioning by comparing a target image against a preset template. Such methods have low tolerance to local deformation of the object to be detected and are easily disturbed by it, so the position of the object cannot be located accurately. The conventional art therefore suffers from low position detection accuracy for the object to be detected.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an object position detection method, apparatus, computer device, computer-readable storage medium, and computer program product capable of improving the position detection accuracy of an object to be detected.
In a first aspect, the present application provides an object position detection method, the method comprising: acquiring a target image, and performing edge point detection on an object to be detected in the target image to obtain an edge point set, together with the coordinate position in the target image, the edge orientation angle and the edge confidence of each edge point in the edge point set; for any edge point in the edge point set, querying a geometric position relation matching the edge point in a pre-constructed template edge point feature table according to the edge orientation angle corresponding to the edge point, and determining a candidate center position of the object to be detected in the target image according to the matched geometric position relation and the coordinate position corresponding to the edge point; performing weighted voting on the candidate center position corresponding to the edge point according to the edge confidence corresponding to the edge point to generate a voting result corresponding to the edge point, wherein the voting result represents probability information that the candidate center position is the true center position of the object to be detected; and fusing the voting results corresponding to the edge points in the edge point set to obtain a center position voting heat map, and outputting center point position information of the object to be detected according to the coordinate position of the peak point in the center position voting heat map.
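The first step, extracting edge points that each carry a coordinate position, an edge orientation angle, and an edge confidence, could for instance be based on image gradients. A minimal central-difference sketch follows; the patent does not prescribe the gradient operator, and the threshold and the magnitude-based confidence normalization are illustrative assumptions:

```python
import math
import numpy as np

def detect_edge_points(img, threshold=50.0):
    """Return (x, y, orientation_deg, confidence) tuples for pixels whose
    gradient magnitude exceeds the threshold; confidence is the magnitude
    normalized by the image maximum."""
    f = img.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]   # central differences, x direction
    gy[1:-1, :] = f[2:, :] - f[:-2, :]   # central differences, y direction
    mag = np.hypot(gx, gy)
    max_mag = mag.max()
    points = []
    if max_mag == 0:
        return points
    for y, x in zip(*np.nonzero(mag > threshold)):
        angle = math.degrees(math.atan2(gy[y, x], gx[y, x])) % 360.0
        points.append((float(x), float(y), angle,
                       float(mag[y, x] / max_mag)))
    return points
```

Each tuple supplies exactly the three per-point attributes the method consumes: the coordinate position for placing the vote, the orientation angle for the table lookup, and the confidence for weighting.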
The above technical solution has the following advantages. A target image is acquired and edge point detection is performed on the object to be detected in the target image to obtain an edge point set, together with the coordinate position in the target image, the edge orientation angle and the edge confidence of each edge point in the edge point set. For any edge point in the edge point set, a geometric position relation matching the edge point is queried in a pre-constructed template edge point feature table according to the edge orientation angle corresponding to the edge point, and a candidate center position of the object to be detected in the target image is determined according to the matched geometric position relation and the coordinate position corresponding to the edge point; the template edge point feature table records the mapping relation between the edge orientation angle of a template edge point and the geometric position relation from the template edge point to the template center point, the template edge points being a plurality of edge points of the template object of the object to be detected and the template center point being the center point of the template object. Weighted voting is then performed on the candidate center position corresponding to the edge point according to the edge confidence corresponding to the edge point to generate a voting result corresponding to the edge point, the voting result representing probability information that the candidate center position is the true center position of the object to be detected. Finally, the voting results corresponding to the edge points are fused to obtain a center position voting heat map, and the center point position information of the object to be detected is output according to the coordinate position of the peak point in the center position voting heat map, thus enabling the template to be deformed under the condition that the single edge point is the co