CN-121810739-B - Target tracking and object identification method based on angle interval
Abstract
The invention discloses a target tracking and object identification method based on angle intervals. Objects in the picture captured by a camera are identified with a YOLO model, and each object's class and coordinates in the picture are recorded. The angle interval of each identified object is then calculated, and the feature points of every object stored in each angle interval are extracted and recorded. After a new frame is acquired, its recorded result is compared with that of the previous frame: objects appearing in adjacent angle intervals whose classes match, whose area difference lies within a preset threshold, and whose feature-point distance lies within a preset distance threshold are judged to be the same object, and the object's motion trajectory is updated. Because the scheme uses angle intervals both to identify objects and to decide whether a motion trajectory belongs to the same object, it occupies fewer resources and runs faster, making it better suited to scenarios with limited computing power and real-time requirements.
Inventors
- ZHAI CHUNYU
- YANG GUOKUAN
- MAO HULIN
- LI YANG
- FU MING
- LIU WEI
- WANG BINGQUAN
- LI ENLU
- ZOU KUN
- Zhai yao
Assignees
- 成都四为电子信息股份有限公司
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-03-04
Claims (6)
- 1. A target tracking and object identification method based on angle intervals, characterized by comprising the following steps:
Step S1: recognize objects in the picture captured by a camera with a YOLO model, and record each object's class and its coordinates in the picture.
Step S2: calculate the angle interval of each identified object and record one or more classes of objects per angle interval; if several objects of the same class fall in a single angle interval, record only the object with the largest area. The calculation of the angle interval comprises:
setting reference angles for the picture acquired by the camera, with the leftmost edge at 0 degrees, the middle at 90 degrees, and the rightmost edge at 180 degrees;
selecting a reference camera: when only one camera is used, that camera is the reference camera; when several cameras are used, the front-facing camera is selected as the reference camera and the others are non-reference cameras. For a non-reference camera, the angle interval of an object is the object's angle in that camera's picture plus the camera's deflection angle and the deviation angle between that camera's picture and the reference camera's picture;
for pictures subsequently acquired by the reference camera, the angle interval where an object actually lies is the object's angle in the picture plus the camera's deflection angle. The angle refers to the angle of the object's center point in the captured picture, where the center point is the center of the object's bounding box.
Step S3: extract and record the feature points of every object stored in each angle interval.
Step S4: after a new frame is acquired, repeat steps S1-S3 to obtain data for the new frame and compare its recorded result with that of the previous frame; objects in adjacent angle intervals with the same class, an area difference within a preset threshold, and a feature-point distance within a preset distance threshold are judged to be the same object, and the object's motion trajectory is updated.
- 2. The target tracking and object identification method based on angle intervals according to claim 1, wherein step S1 comprises the following sub-steps:
Step S11: preprocess the single-frame image data, load the YOLO model with the ncnn deep-learning inference framework, feed in the preprocessed image data, and obtain the output feature maps.
Step S12: define the downsampling rates of the multi-scale feature maps to cover target detection at different scales, and generate the coordinates and corresponding stride of each grid point.
Step S13: decode the bounding-box parameters output by the model, filter out recognition results whose confidence is below a preset threshold to obtain candidate-box data, sort the candidates by confidence, and remove overlapping candidate boxes with NMS.
Step S14: convert the coordinates, width, and height of the remaining candidate boxes from the model input space to the original image space, and finally generate the object bounding-box data.
- 3. The target tracking and object identification method based on angle intervals according to claim 2, wherein the preprocessing of the image data comprises image resizing, image format conversion, and normalization.
- 4. The target tracking and object identification method based on angle intervals according to claim 2, wherein the candidate-box data comprises the object class, the confidence, and the coordinates, width, and height of the candidate box.
- 5. The target tracking and object identification method based on angle intervals according to claim 1, wherein extracting and recording the feature points in step S3 comprises converting the image inside an object's bounding box into a grayscale image with OpenCV image processing, and extracting and recording feature points in the grayscale image by running the SIFT algorithm.
- 6. The target tracking and object identification method based on angle intervals according to claim 1, wherein step S4 further comprises storing the information of each object for a predetermined storage time and deleting it once that time is exceeded; if the object then appears in the picture again, it is marked as a new object.
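The storage rule in claim 6 can be sketched as a small track store that expires records after a predetermined time. This is an illustrative sketch only: the class name, record format, and the 5-second storage time are assumptions, not taken from the patent.

```python
import time

STORAGE_TIME = 5.0  # assumed predetermined storage time, in seconds

class TrackStore:
    """Keeps per-object records and expires them after STORAGE_TIME (claim 6)."""

    def __init__(self):
        self._tracks = {}   # object_id -> (record, last_seen_timestamp)
        self._next_id = 0

    def add_new(self, record, now=None):
        """Register an object as new, e.g. one that reappeared after expiry."""
        now = time.time() if now is None else now
        object_id = self._next_id
        self._next_id += 1
        self._tracks[object_id] = (record, now)
        return object_id

    def update(self, object_id, record, now=None):
        """Refresh an existing object's record and last-seen time."""
        now = time.time() if now is None else now
        self._tracks[object_id] = (record, now)

    def expire(self, now=None):
        """Delete every record older than the predetermined storage time."""
        now = time.time() if now is None else now
        expired = [oid for oid, (_, seen) in self._tracks.items()
                   if now - seen > STORAGE_TIME]
        for oid in expired:
            del self._tracks[oid]
        return expired

    def __contains__(self, object_id):
        return object_id in self._tracks
```

An expired object that reappears receives a fresh identifier from `add_new`, which is what "marked as a new object" amounts to in this sketch.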
Description
Target tracking and object identification method based on angle interval

Technical Field

The invention relates to the technical field of target tracking, and in particular to a target tracking and object identification method based on angle intervals.

Background

Traditional target tracking algorithms generally decide whether a motion trajectory belongs to the same object through IOU matching and Kalman filtering. On devices with limited computing power they occupy considerable resources, run slowly, and may even crash, so they are poorly suited to scenarios with limited computing power and real-time requirements. For example, in a falling-object alarm scenario, a moving object (or a moving camera) may cause the object to disappear from the picture acquired by one camera and then appear in the picture acquired by another; in such a scenario a traditional target tracking algorithm cannot decide whether it is the same object.

Disclosure of Invention

In view of the above technical problems, the invention provides a target tracking and object identification method based on angle intervals.
The invention is realized with the following technical scheme. A target tracking and object identification method based on angle intervals comprises the following steps:

Step S1: recognize objects in the picture captured by a camera with a YOLO model, and record each object's class and its coordinates in the picture.

Step S2: calculate the angle interval of each identified object and record one or more classes of objects per angle interval; if several objects of the same class fall in a single angle interval, record only the object with the largest area.

Step S3: extract and record the feature points of every object stored in each angle interval.

Step S4: after a new frame is acquired, repeat steps S1-S3 to obtain data for the new frame and compare its recorded result with that of the previous frame; objects in adjacent angle intervals with the same class, an area difference within a preset threshold, and a feature-point distance within a preset distance threshold are judged to be the same object, and the object's motion trajectory is updated.
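Steps S2 and S4 can be sketched as follows, assuming a simple linear mapping of the bounding-box center onto [0, 180] degrees across the image width and a fixed interval width. All names and threshold values here are illustrative assumptions, not figures from the patent.

```python
import math

NUM_INTERVALS = 18            # assumed: 10-degree intervals over [0, 180)
AREA_THRESHOLD = 500.0        # assumed preset area-difference threshold (px^2)
FEATURE_THRESHOLD = 50.0      # assumed preset feature-point distance threshold

def angle_interval(cx, image_width, deflection_deg=0.0):
    """S2: leftmost edge -> 0 deg, middle -> 90 deg, rightmost edge -> 180 deg,
    plus the camera's deflection angle, bucketed into an interval index."""
    angle = cx / image_width * 180.0 + deflection_deg
    return int(angle // (180.0 / NUM_INTERVALS))

def feature_distance(feats_a, feats_b):
    """Mean Euclidean distance between paired feature-point coordinates."""
    pairs = list(zip(feats_a, feats_b))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def same_object(prev, cur):
    """S4: same class, equal or adjacent angle interval, and area difference
    and feature-point distance within the preset thresholds."""
    return (prev["cls"] == cur["cls"]
            and abs(prev["interval"] - cur["interval"]) <= 1
            and abs(prev["area"] - cur["area"]) <= AREA_THRESHOLD
            and feature_distance(prev["feats"], cur["feats"]) <= FEATURE_THRESHOLD)
```

For a non-reference camera, the same `angle_interval` call would additionally add the deviation angle between that camera's picture and the reference camera's picture to `deflection_deg`.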
Specifically, step S1 comprises the following sub-steps. Step S11: preprocess the single-frame image data, load the YOLO model with the ncnn deep-learning inference framework, feed in the preprocessed image data, and obtain the output feature maps. Step S12: define the downsampling rates of the multi-scale feature maps to cover target detection at different scales, and generate the coordinates and corresponding stride of each grid point. Step S13: decode the bounding-box parameters output by the model, filter out recognition results whose confidence is below a preset threshold to obtain candidate-box data, sort the candidates by confidence, and remove overlapping candidate boxes with NMS. Step S14: convert the coordinates, width, and height of the remaining candidate boxes from the model input space to the original image space, and finally generate the object bounding-box data.

Specifically, the preprocessing of the image data comprises image resizing, image format conversion, and normalization. Specifically, the candidate-box data comprises the object class, the confidence, and the coordinates, width, and height of the candidate box.

Specifically, calculating the angle interval in step S2 comprises: setting reference angles for the picture acquired by the camera, with the leftmost edge at 0 degrees, the middle at 90 degrees, and the rightmost edge at 180 degrees; selecting a reference camera and recording the camera's deflection angle when the reference camera captures its first frame as the original angle; and, for pictures subsequently acquired by the reference camera, setting the angle interval where an object actually lies to the object's angle in the picture plus the camera's deflection angle.
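The filtering and suppression in sub-step S13 can be sketched in pure Python as below. Note that NMS internally scores box overlap (IoU between candidate boxes from one frame); this is standard YOLO post-processing and distinct from the frame-to-frame IOU track matching the scheme avoids. The threshold values and the (x, y, w, h) box format are assumptions for illustration.

```python
CONF_THRESHOLD = 0.25   # assumed preset confidence threshold
IOU_THRESHOLD = 0.45    # assumed NMS overlap threshold

def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(candidates):
    """S13: drop low-confidence candidates, sort by confidence, and keep a
    candidate only if it does not overlap an already-kept box of the same
    class above the threshold. Each candidate: {'box', 'conf', 'cls'}."""
    kept = []
    pool = sorted((c for c in candidates if c["conf"] >= CONF_THRESHOLD),
                  key=lambda c: c["conf"], reverse=True)
    for cand in pool:
        if all(iou(cand["box"], k["box"]) < IOU_THRESHOLD
               for k in kept if k["cls"] == cand["cls"]):
            kept.append(cand)
    return kept
```

Sub-step S14 would then rescale the kept boxes from the model input size back to the original image size before they enter the angle-interval calculation.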
The reference camera is selected as follows: when only one camera is used, that camera is the reference camera; when several cameras are used, the front-facing camera is taken as the reference camera and the other cameras as non-reference cameras. Specifically, for a non-reference camera, the angle interval where an object actually lies is set to the object's angle in the captured picture plus the camera's deflection angle and the deviation angle between that camera's picture and the reference camera's picture. Specifically, extracting and recording the feature points in step S3 comprises the steps of converting an image in an