CN-116994177-B - Video analysis method, device, equipment and medium

CN 116994177 B

Abstract

An embodiment of the application provides a video analysis method, apparatus, device, and medium for reducing the waste of chip computing power on back-end devices in the prior art. The method includes: extracting a plurality of first video frames from a video stream containing a target object according to a currently stored frame-extraction rate and the end position of the last extraction; dividing each first video frame into a plurality of sub-regions; counting the sub-regions in which the target object appears in each first video frame; and analyzing the sub-images of those sub-regions. Because only the sub-images corresponding to the sub-regions in which the target object appears are analyzed, rather than the whole frame, the computational waste of the back-end device is reduced, especially in scenes where the targets in the front-end code stream are discrete or appear infrequently.

Inventors

  • HU HONG

Assignees

  • Zhejiang Dahua Technology Co., Ltd. (浙江大华技术股份有限公司)

Dates

Publication Date
2026-05-08
Application Date
2023-07-20

Claims (9)

  1. A video analysis method, the method comprising: extracting a plurality of first video frames from a video stream containing a target object according to a currently stored frame-extraction rate and the end position of the last extraction; dividing each first video frame into a plurality of sub-regions and counting the sub-regions in which the target object appears in each first video frame; and analyzing the sub-images of the sub-regions in which the target object appears in each first video frame; wherein the currently stored frame-extraction rate is determined as follows: based on the stored previous frame-extraction rate, acquiring a plurality of second video frames contained in each of a plurality of preset time periods; for each time period, determining a first number of sub-regions in which the target object appears in the time period and a first count of appearances of the target object in each such sub-region, according to the sub-regions in which the target object appears in each second video frame in the time period; determining an average frequency corresponding to the time period according to the first count and the first number; determining a distribution proportion corresponding to the time period according to the first number and the total number of sub-regions contained in each second video frame; and determining the currently stored frame-extraction rate according to at least one adjustment quantity among the average frequency and the distribution proportion corresponding to each time period.
  2. The method according to claim 1, further comprising: for any first video frame, determining a score for each sub-image according to the features of the target object obtained by analyzing the sub-image and pre-stored sub-scores corresponding to the features; determining a scoring result for each of the plurality of sub-regions according to the scores of the sub-images corresponding to that sub-region in each first video frame; and determining a target area image in each first video frame from the sub-regions whose scoring result exceeds a threshold, and analyzing the target area image in each first video frame.
  3. The method according to claim 1, wherein determining the currently stored frame-extraction rate according to the average frequency and the distribution proportion corresponding to each time period comprises: determining, from the average frequency corresponding to each time period, whether the average frequencies decrease or increase monotonically in time order; determining, from the distribution proportion corresponding to each time period, whether the distribution proportions decrease or increase monotonically in time order; if at least one adjustment quantity among the average frequency and the distribution proportion decreases monotonically across the time periods, reducing the previous frame-extraction rate to obtain and store the current frame-extraction rate; and if at least one adjustment quantity among the average frequency and the distribution proportion increases monotonically across the time periods, increasing the previous frame-extraction rate to obtain and store the current frame-extraction rate.
  4. The method of claim 3, wherein reducing the previous frame-extraction rate to obtain the current frame-extraction rate comprises: determining a first target difference between the adjustment quantities corresponding to the first time period and the last time period; determining a first target rate adjustment value corresponding to the first target difference according to a stored correspondence between differences and rate adjustment values; and reducing the previous frame-extraction rate according to the previous frame-extraction rate and the first target rate adjustment value.
  5. The method of claim 3, wherein increasing the previous frame-extraction rate to obtain the current frame-extraction rate comprises: determining a second target difference between the adjustment quantities corresponding to the first time period and the last time period; determining a second target rate adjustment value corresponding to the second target difference according to a stored correspondence between differences and rate adjustment values; and increasing the previous frame-extraction rate according to the previous frame-extraction rate and the second target rate adjustment value.
  6. The method of claim 1, wherein counting the sub-regions in which the target object appears in each first video frame comprises: judging whether each sub-region of each first video frame in which the target object appears lies within a preset area, the preset area comprising a configured safety area and/or a configured shielding area; and if not, continuing to count the sub-regions in which the target object appears in each first video frame.
  7. A video analysis apparatus, the apparatus comprising: an extraction module configured to extract a plurality of first video frames from a video stream containing a target object according to a currently stored frame-extraction rate and the end position of the last extraction; a statistics module configured to divide each first video frame into a plurality of sub-regions and count the sub-regions in which the target object appears in each first video frame; and an analysis module configured to analyze the sub-images of the sub-regions in which the target object appears in each first video frame; wherein the extraction module is specifically configured to: acquire, based on the stored previous frame-extraction rate, a plurality of second video frames contained in each of a plurality of preset time periods; divide each second video frame into a plurality of sub-regions; for each time period, determine a first number of sub-regions in which the target object appears in the time period and a first count of appearances of the target object in each such sub-region according to the sub-regions in which the target object appears in each second video frame in the time period; determine an average frequency corresponding to the time period according to the first count and the first number; determine a distribution proportion corresponding to the time period according to the first number and the total number of sub-regions contained in each second video frame; and determine the currently stored frame-extraction rate according to at least one adjustment quantity among the average frequency and the distribution proportion corresponding to each time period.
  8. An electronic device comprising at least a processor and a memory, the processor being configured to implement the steps of the video analysis method according to any one of claims 1-6 when executing a computer program stored in the memory.
  9. A computer storage medium storing a computer program executable by an electronic device, wherein the computer program, when run on the electronic device, causes the electronic device to perform the steps of the video analysis method of any one of claims 1-6.
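The adaptive frame-extraction rate described in claims 1 and 3-5 can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the function names, the representation of a period as a list of per-frame sub-region sets, and the difference-to-adjustment lookup table are all assumptions made for the example.

```python
def period_stats(period_frames, total_subregions):
    """For one time period: derive the 'first number' (distinct sub-regions
    where the target appears), the 'first count' (total appearances), and
    from them the average frequency and distribution proportion."""
    appearances = {}  # sub-region index -> number of frames in which it held the target
    for frame_subregions in period_frames:  # each entry: set of sub-region indices
        for idx in frame_subregions:
            appearances[idx] = appearances.get(idx, 0) + 1
    first_number = len(appearances)              # distinct sub-regions with the target
    first_count = sum(appearances.values())      # total appearances across the period
    avg_frequency = first_count / first_number if first_number else 0.0
    distribution = first_number / total_subregions
    return avg_frequency, distribution


def adjust_rate(last_rate, periods, total_subregions, diff_to_step):
    """Decrease the rate if the frequency or the distribution falls
    monotonically across the periods; increase it if either rises
    monotonically; otherwise keep the stored rate (claims 3-5)."""
    stats = [period_stats(p, total_subregions) for p in periods]
    freqs = [s[0] for s in stats]
    dists = [s[1] for s in stats]

    def decreasing(xs):
        return all(a > b for a, b in zip(xs, xs[1:]))

    def increasing(xs):
        return all(a < b for a, b in zip(xs, xs[1:]))

    def lookup_step(first, last):
        # Claims 4-5: map the first-vs-last-period difference to a rate step
        # via a stored correspondence table of (threshold, step) pairs.
        diff = abs(first - last)
        for threshold, step in diff_to_step:
            if diff <= threshold:
                return step
        return diff_to_step[-1][1]

    if decreasing(freqs) or decreasing(dists):
        series = freqs if decreasing(freqs) else dists
        return max(1, last_rate - lookup_step(series[0], series[-1]))
    if increasing(freqs) or increasing(dists):
        series = freqs if increasing(freqs) else dists
        return last_rate + lookup_step(series[0], series[-1])
    return last_rate
```

For example, with a 16-cell grid and three periods whose occupied-cell proportions fall from 3/16 to 1/16, the sketch reduces a stored rate of 10 by the table step for that difference; a mirror-image rising trend raises it by the same step.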

Description

Video analysis method, device, equipment and medium

Technical Field

The present invention relates to the field of image processing technologies, and in particular to a video analysis method, apparatus, device, and medium.

Background

In the current video surveillance field, as shown in fig. 1, a back-end device such as a network video recorder (NVR) may access multiple front-end devices, receiving the front-end code streams of different front-end devices over different channels and then performing different types of analysis on the code stream of each channel, generating and reporting analysis results. At present, the allocation scheme for analyzing a channel's code stream binds each channel to an analyzer: as shown in fig. 2, channel 1 corresponds to analyzer 1, and each analyzer decodes and analyzes the code stream of its channel to obtain the targets and their attributes and reports them. Once a channel starts an analyzer, that analyzer remains occupied analyzing the channel's code stream in real time. Furthermore, each back-end device has a limited overall analysis capability, i.e., limited overall computing power, due to the hardware limitations of its own chip specification.
When the back-end device allocates computing power, it does so according to the resolution of the video frames in the front-end code stream: in general, the higher the resolution, the more computing power the back-end device allocates to those frames, and the lower the resolution, the less. However, some high-resolution video frames contain no or few valid targets; because the back-end device analyzes the entire video frame, it still allocates substantial computing power to such frames. Therefore, in scenes where the video frames in the front-end code stream have high resolution but the targets are discrete or appear infrequently, analyzing the whole video frame on the back-end device wastes much of its chip's computing power.

Disclosure of Invention

The embodiment of the application provides a video analysis method, apparatus, device, and medium to address the waste of back-end chip computing power in the prior art.
In a first aspect, an embodiment of the present application provides a video analysis method, comprising: extracting a plurality of first video frames from a video stream containing a target object according to a currently stored frame-extraction rate and the end position of the last extraction; dividing each first video frame into a plurality of sub-regions; counting the sub-regions in which the target object appears in each first video frame; and analyzing the sub-images of the sub-regions in which the target object appears in each first video frame.

In a second aspect, an embodiment of the present application further provides a video analysis apparatus, comprising: an extraction module configured to extract a plurality of first video frames from a video stream containing a target object according to a currently stored frame-extraction rate and the end position of the last extraction; a statistics module configured to divide each first video frame into a plurality of sub-regions and count the sub-regions in which the target object appears in each first video frame; and an analysis module configured to analyze the sub-images of the sub-regions in which the target object appears in each first video frame.

In a third aspect, an embodiment of the present application further provides an electronic device comprising at least a processor and a memory, the processor being configured to implement the steps of the video analysis method according to any of the above when executing a computer program stored in the memory.

In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the video analysis method as described in any of the above.
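As a rough illustration of the first aspect, the following sketch divides a frame into a grid of sub-regions, records which cells a detected target's bounding box overlaps, and crops only those sub-images for further analysis. The grid size, the box format, the upstream detector, and the nested-list frame representation are assumptions; the patent does not specify any of them.

```python
def subregions_with_target(frame_w, frame_h, grid, boxes):
    """Return the grid-cell indices that any target bounding box overlaps.

    boxes: iterable of (x1, y1, x2, y2) target boxes from an upstream detector.
    grid:  (cols, rows) of the sub-region grid.
    """
    cols, rows = grid
    cell_w, cell_h = frame_w / cols, frame_h / rows
    hit = set()
    for x1, y1, x2, y2 in boxes:
        # Range of columns/rows the box spans (x2/y2 are exclusive edges).
        c0, c1 = int(x1 // cell_w), min(cols - 1, int((x2 - 1) // cell_w))
        r0, r1 = int(y1 // cell_h), min(rows - 1, int((y2 - 1) // cell_h))
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                hit.add(r * cols + c)
    return hit


def crop_subimages(frame, grid, indices):
    """Crop the sub-images for the given cell indices; only these crops,
    not the whole frame, would be passed to the analyzer."""
    h, w = len(frame), len(frame[0])  # frame as a nested list of pixel rows
    cols, rows = grid
    cell_w, cell_h = w // cols, h // rows
    crops = {}
    for idx in sorted(indices):
        r, c = divmod(idx, cols)
        crops[idx] = [row[c * cell_w:(c + 1) * cell_w]
                      for row in frame[r * cell_h:(r + 1) * cell_h]]
    return crops
```

Sending only the occupied crops downstream is what lets the back-end device skip the empty regions of a high-resolution frame, which is the computing-power saving the application claims.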
In the embodiment of the application, a plurality of first video frames are extracted from a video stream containing target objects according to the currently stored frame-extraction rate and the end position of the last extraction.