CN-121999053-A - Impact point positioning method and system based on machine vision
Abstract
The invention discloses a machine-vision-based impact point positioning method and system. The method comprises: synchronously acquiring a transient image sequence of a bullet penetrating the target surface through multi-angle high-speed cameras arranged in front of the shooting target position, and transmitting the sequence to an edge computing node; performing dynamic frame-difference analysis on the transient image sequence at the edge computing node, extracting consecutive frames containing bullet-hole deformation features, and identifying the bullet-hole contour with a lightweight convolutional neural network; based on the identified contour, combining camera calibration parameters and a perspective projection model to calculate the three-dimensional spatial coordinates of the bullet-hole center relative to the bulls-eye and generate impact point position data; and, according to the impact point position data and a preset ring-number division model, automatically calculating the shooting ring score and generating a visual training report containing ballistic distribution and shooting stability analysis. Embodiments of the invention achieve non-contact, high-precision, real-time positioning of impact points and effectively overcome environmental interference.
Inventors
- Jin Lihan
- Sheng Tianyang
- Fang Shengyu
Assignees
- 杭州富凌科技有限公司 (Hangzhou Fuling Technology Co., Ltd.)
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-04-10
Claims (10)
- 1. A machine vision-based impact point positioning method, the method comprising: synchronously acquiring a transient image sequence of a bullet penetrating the target surface through multi-angle high-speed cameras arranged in front of a shooting target position, and transmitting the transient image sequence to an edge computing node; performing dynamic frame-difference analysis on the transient image sequence at the edge computing node, extracting consecutive frames containing bullet-hole deformation features, and identifying the bullet-hole contour by adopting a lightweight convolutional neural network; based on the identified bullet-hole contour, calculating the three-dimensional spatial coordinates of the bullet-hole center relative to the bulls-eye by combining the camera calibration parameters and the perspective projection model, and generating impact point position data; and, according to the impact point position data, combining a preset ring-number division model, automatically calculating the shooting ring score and generating a visual training report containing ballistic distribution and shooting stability analysis.
- 2. The method of claim 1, wherein synchronously acquiring the transient image sequence of the bullet penetrating the target surface through multi-angle high-speed cameras arranged in front of the shooting target position, and transmitting it to an edge computing node, comprises: deploying a plurality of high-speed cameras in front of the shooting target, aligned with the target surface at different angles, and configuring a synchronous trigger device so that all cameras begin acquisition simultaneously at the moment the bullet penetrates the target surface, generating a synchronous trigger signal; controlling each high-speed camera, according to the synchronous trigger signal, to acquire images at a rate above 1000 frames per second and transmit the raw image data in real time to the edge computing node over gigabit Ethernet, generating multiple raw image streams; receiving the multiple raw image streams at the edge computing node and performing precise time synchronization of the image frames from the different cameras by a timestamp alignment algorithm, generating a time-synchronized multi-view image sequence; and performing preliminary quality checks on the time-synchronized multi-view image sequence, removing frames corrupted by transmission errors, compensating missing frames by interpolation, and finally generating a complete transient image sequence.
- 3. The method of claim 2, wherein performing dynamic frame-difference analysis on the transient image sequence at the edge computing node, extracting consecutive frames containing bullet-hole deformation features, and identifying the bullet-hole contour using a lightweight convolutional neural network comprises: applying Gaussian filtering to denoise the transient image sequence, calculating the sum of absolute differences between adjacent frames, and generating a frame-difference energy map sequence; setting a dynamic threshold based on the frame-difference energy map sequence and extracting the consecutive frame segments whose frame-difference energy exceeds the threshold, the segments covering the complete deformation of the bullet-hole formation process, thereby generating candidate bullet-hole image segments; inputting the candidate bullet-hole image segments into a lightweight convolutional neural network that adopts a MobileNetV architecture and extracts multi-scale features of the bullet-hole region through depthwise separable convolutional layers, generating an initial feature map; and applying non-maximum suppression and edge detection to the initial feature map to identify bullet-hole boundary points and generate a smoothed bullet-hole contour by curve fitting.
- 4. The method of claim 3, wherein calculating the three-dimensional spatial coordinates of the bullet-hole center relative to the bulls-eye based on the identified bullet-hole contour, in combination with the camera calibration parameters and the perspective projection model, to generate the impact point position data comprises: loading the calibration parameters of each high-speed camera, including the intrinsic matrix, distortion coefficients and extrinsic matrix, and establishing the transformation between the camera coordinate system and the world coordinate system to generate camera calibration data; projecting the bullet-hole contour curve into the image coordinate system and calculating the centroid of the contour as the two-dimensional coordinates of the bullet-hole center in the image, generating a set of two-dimensional bullet-hole center coordinates across the multiple viewpoints; triangulating the set of two-dimensional center coordinates using multi-view geometry, based on the camera calibration data and the perspective projection model, to calculate the three-dimensional coordinates of the bullet-hole center in the world coordinate system and generate initial three-dimensional coordinates; and optimizing and correcting the initial three-dimensional coordinates, solving for the optimal solution by least squares while accounting for lens distortion and measurement error, to finally generate accurate impact point position data.
- 5. The method of claim 4, wherein automatically calculating the shooting ring score and generating a visual training report including ballistic distribution and shooting stability analysis, based on the impact point position data in combination with a preset ring-number division model, comprises: loading the preset ring-number division model, which defines the radius threshold and score of each ring of the target surface, matching the impact point position data against the model, and calculating the shooting ring score; calculating concentration and dispersion statistics of the impact points, including mean deviation and standard error, from the sequence of impact point position data, and generating a ballistic distribution statistical analysis; evaluating shooting stability through time-series analysis of the ring scores together with the ballistic distribution statistics, calculating the fluctuation and trend of ring scores over consecutive shots, and generating a shooting stability evaluation index; and integrating the ring scores, the ballistic distribution statistics and the stability index, and generating, by data visualization techniques, a visual training report containing a heat map, a scatter plot and a trend chart.
- 6. A machine vision-based impact point positioning system, the system comprising: an acquisition module for synchronously acquiring a transient image sequence of a bullet penetrating the target surface through multi-angle high-speed cameras arranged in front of the shooting target position, and transmitting the sequence to an edge computing node; an extraction module for performing dynamic frame-difference analysis on the transient image sequence at the edge computing node, extracting consecutive frames containing bullet-hole deformation features, and identifying the bullet-hole contour with a lightweight convolutional neural network; a calculation module for calculating the three-dimensional spatial coordinates of the bullet-hole center relative to the bulls-eye, based on the identified bullet-hole contour combined with the camera calibration parameters and the perspective projection model, and generating impact point position data; and a generation module for automatically calculating the shooting ring score according to the impact point position data combined with a preset ring-number division model, and generating a visual training report containing ballistic distribution and shooting stability analysis.
- 7. The system according to claim 6, wherein the acquisition module is specifically configured to: deploy a plurality of high-speed cameras in front of the shooting target, aligned with the target surface at different angles, with a synchronous trigger device configured so that all cameras begin acquisition simultaneously at the moment the bullet penetrates the target surface, generating a synchronous trigger signal; control each high-speed camera, according to the synchronous trigger signal, to acquire images at a rate above 1000 frames per second and transmit the raw image data in real time to the edge computing node over gigabit Ethernet, generating multiple raw image streams; receive the multiple raw image streams at the edge computing node and perform precise time synchronization of the image frames from the different cameras by a timestamp alignment algorithm, generating a time-synchronized multi-view image sequence; and perform preliminary quality checks on the time-synchronized multi-view image sequence, removing frames corrupted by transmission errors, compensating missing frames by interpolation, and finally generating a complete transient image sequence.
- 8. The system according to claim 7, wherein the extraction module is specifically configured to: apply Gaussian filtering to denoise the transient image sequence, calculate the sum of absolute differences between adjacent frames, and generate a frame-difference energy map sequence; set a dynamic threshold based on the frame-difference energy map sequence and extract the consecutive frame segments whose frame-difference energy exceeds the threshold, the segments covering the complete deformation of the bullet-hole formation process, thereby generating candidate bullet-hole image segments; input the candidate bullet-hole image segments into a lightweight convolutional neural network that adopts a MobileNetV architecture and extracts multi-scale features of the bullet-hole region through depthwise separable convolutional layers, generating an initial feature map; and apply non-maximum suppression and edge detection to the initial feature map to identify bullet-hole boundary points and generate a smoothed bullet-hole contour by curve fitting.
- 9. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any one of claims 1-5 when run.
- 10. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is arranged to run the computer program to perform the method of any one of claims 1-5.
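The timestamp alignment step recited in claims 2 and 7 can be sketched as nearest-neighbor matching of per-camera frame timestamps against a reference stream. This is a minimal illustration, not the patent's algorithm: the tolerance value, stream layout, and the `align_streams` helper are assumptions.

```python
import bisect

def align_streams(reference, others, tolerance_us=500):
    """Match frames from several camera streams by nearest timestamp.

    reference : sorted list of (timestamp_us, frame_id) for the reference camera
    others    : list of sorted timestamp/frame lists, one per extra camera
    Returns one multi-view tuple per reference frame; a camera with no frame
    within the tolerance contributes None (a candidate for interpolation).
    """
    aligned = []
    for ts, fid in reference:
        group = [fid]
        for stream in others:
            keys = [t for t, _ in stream]
            i = bisect.bisect_left(keys, ts)
            # Candidates: the two neighbours around the insertion point
            best = None
            for j in (i - 1, i):
                if 0 <= j < len(stream):
                    dt = abs(stream[j][0] - ts)
                    if best is None or dt < best[0]:
                        best = (dt, stream[j][1])
            group.append(best[1] if best and best[0] <= tolerance_us else None)
        aligned.append(tuple(group))
    return aligned

cam0 = [(0, "a0"), (1000, "a1"), (2000, "a2")]
cam1 = [(40, "b0"), (990, "b1"), (2600, "b2")]   # last frame drifted too far
print(align_streams(cam0, [cam1]))
# → [('a0', 'b0'), ('a1', 'b1'), ('a2', None)]
```

The `None` entries correspond to the missing frames that claim 2 says are compensated by interpolation.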
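The multi-view triangulation of claim 4 can be sketched with standard linear (DLT) triangulation: each calibrated view contributes two linear constraints on the homogeneous 3-D point, solved by SVD. The projection matrices and world point below are synthetic assumptions for illustration, not calibration data from the patent.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation: recover a 3-D point from its 2-D
    projections in several calibrated views.

    proj_mats : list of 3x4 camera projection matrices P = K [R | t]
    points_2d : list of (u, v) pixel coordinates, one per view
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two rows of the homogeneous system A X = 0
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Least-squares solution = right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Illustrative two-camera rig: identity intrinsics, 1 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known world point to obtain consistent pixel coordinates
Xw = np.array([0.2, -0.1, 5.0, 1.0])
uv = [(P @ Xw)[:2] / (P @ Xw)[2] for P in (P1, P2)]

print(triangulate_point([P1, P2], uv))   # ≈ [0.2, -0.1, 5.0]
```

The least-squares refinement mentioned at the end of claim 4 would then minimize reprojection error around this linear estimate, typically with a distortion model applied first.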
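The ring-number matching and stability analysis of claim 5 amount to radial binning of impact-point offsets plus simple time-series statistics. The ring radii, scores, and shot coordinates below are invented for illustration; the patent's preset ring-number division model is not disclosed in this chunk.

```python
import math

# Illustrative ring model: ring 10 within 5 mm of the bulls-eye, each outer
# band 25 mm wider (assumed values, not the patent's preset model).
RING_RADII_MM = [5 + 25 * k for k in range(10)]   # ring 10 .. ring 1

def ring_score(x_mm, y_mm):
    """Map an impact point's planar offset from the bulls-eye to a ring score."""
    r = math.hypot(x_mm, y_mm)
    for i, radius in enumerate(RING_RADII_MM):
        if r <= radius:
            return 10 - i        # innermost band scores 10
    return 0                     # off-target

def stability(scores):
    """Shot-to-shot stability: mean score and standard deviation."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    return mean, math.sqrt(var)

shots = [(2.0, 1.0), (20.0, 10.0), (50.0, 0.0)]      # (x, y) offsets in mm
scores = [ring_score(x, y) for x, y in shots]
print(scores)                    # → [10, 9, 8]
print(stability(scores))
```

The heat map, scatter plot, and trend chart of the training report would be rendered from the same `shots` and `scores` sequences.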
Description
Impact point positioning method and system based on machine vision

Technical Field

The invention belongs to the technical field of machine vision, and particularly relates to a machine vision-based impact point positioning method and system.

Background

In traditional shooting training, impact point positioning and ring-score judgment depend mainly on manual target checking. This approach is inefficient, subject to subjective error, and cannot provide real-time data feedback to the shooter during training. To improve efficiency and automation, automatic target-scoring systems based on the acousto-electric principle or on a single vision sensor have appeared. However, these systems remain limited in practice: acousto-electric systems are susceptible to environmental interference and offer limited positioning precision, while conventional single-camera vision schemes are unstable under complex illumination, show poor robustness to interference such as transient deformation and the smoke and dust raised as the bullet penetrates the target surface, and struggle to capture and analyze bullet-hole features in real time with high precision and reliability. In addition, the prior art usually focuses on simple ring-score judgment and lacks the mining and visual presentation of deeper training data such as ballistic distribution and shooting stability, making it difficult to meet the high standards of modern, data-driven shooting training.

Disclosure of Invention

The invention aims to provide a machine vision-based impact point positioning method and system that overcome the defects of the prior art, achieving non-contact, high-precision, real-time positioning of impact points while effectively overcoming environmental interference.
One embodiment of the present application provides a machine vision-based impact point positioning method, which includes: synchronously acquiring a transient image sequence of a bullet penetrating the target surface through multi-angle high-speed cameras arranged in front of a shooting target position, and transmitting the transient image sequence to an edge computing node; performing dynamic frame-difference analysis on the transient image sequence at the edge computing node, extracting consecutive frames containing bullet-hole deformation features, and identifying the bullet-hole contour by adopting a lightweight convolutional neural network; based on the identified bullet-hole contour, calculating the three-dimensional spatial coordinates of the bullet-hole center relative to the bulls-eye by combining the camera calibration parameters and the perspective projection model, and generating impact point position data; and, according to the impact point position data, combining a preset ring-number division model, automatically calculating the shooting ring score and generating a visual training report containing ballistic distribution and shooting stability analysis.
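The dynamic frame-difference step of the method above can be sketched as follows. This is a minimal illustration under stated assumptions: the synthetic frame sequence, the mean-plus-k-sigma threshold, and the helper names are not from the patent.

```python
import numpy as np

def frame_diff_energy(frames):
    """Sum of absolute differences (SAD) between consecutive frames."""
    return np.array([np.abs(b.astype(np.int32) - a.astype(np.int32)).sum()
                     for a, b in zip(frames, frames[1:])])

def extract_candidate_segment(frames, k=3.0):
    """Keep the consecutive frames whose diff energy exceeds a dynamic
    threshold (mean + k * std of the energy sequence)."""
    energy = frame_diff_energy(frames)
    thresh = energy.mean() + k * energy.std()
    hot = np.where(energy > thresh)[0]
    if hot.size == 0:
        return []
    # A frame-pair index i involves frames i and i+1
    return frames[hot[0]: hot[-1] + 2]

# Synthetic 64x64 sequence: static background, dark bullet-hole blob
# appears at frame 5 and persists
frames = [np.full((64, 64), 120, np.uint8) for _ in range(10)]
for f in frames[5:]:
    f[30:38, 30:38] = 0

seg = extract_candidate_segment(frames, k=1.0)
print(len(seg))                  # → 2 (the frames straddling the hole's appearance)
```

In this toy sequence only the transition into frame 5 carries energy, so the candidate segment is the two frames around it; a real bullet-hole event would spread energy over several frames, and that whole segment would feed the contour network.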
Optionally, synchronously acquiring the transient image sequence of the bullet penetrating the target surface through multi-angle high-speed cameras arranged in front of the shooting target position, and transmitting it to an edge computing node, includes: deploying a plurality of high-speed cameras in front of the shooting target, aligned with the target surface at different angles, and configuring a synchronous trigger device so that all cameras begin acquisition simultaneously at the moment the bullet penetrates the target surface, generating a synchronous trigger signal; controlling each high-speed camera, according to the synchronous trigger signal, to acquire images at a rate above 1000 frames per second and transmit the raw image data in real time to the edge computing node over gigabit Ethernet, generating multiple raw image streams; receiving the multiple raw image streams at the edge computing node and performing precise time synchronization of the image frames from the different cameras by a timestamp alignment algorithm, generating a time-synchronized multi-view image sequence; and performing preliminary quality checks on the time-synchronized multi-view image sequence, removing frames corrupted by transmission errors, compensating missing frames by interpolation, and finally generating a complete transient image sequence. Optionally, performing dynamic frame-difference analysis on the transient image sequence at the edge computing node, extracting consecutive frames containing bullet-hole deformation features, and identifying the bullet-hole contour by adopting a lightweight convolutional neural network, includes: applying Gaussian filtering to denoise the transient image sequence, calculating the sum of absolute differences betwe