CN-121985147-A - Multipath intelligent mixed-flow video method based on AI intelligent bayonet
Abstract
A multipath intelligent mixed-flow video method based on an AI intelligent bayonet belongs to the technical field of artificial intelligence and is used for solving the problems of high bandwidth consumption, poor monitoring instantaneity and the like in multipath video monitoring scenes in the prior art. The method comprises the steps that an AI intelligent bayonet camera collects video streams of a corresponding monitoring area in real time, dynamically detects and classifies video stream pictures, uploads real-time video streams or screenshots with real-time stamps to an intelligent NVR module, the intelligent NVR module stores the received real-time video streams or screenshots with real-time stamps in a classified mode, and pushes corresponding monitoring data to a video checking terminal according to a received checking request, the video checking terminal receives the real-time video streams or the screenshots with real-time stamps pushed by the intelligent NVR module, plays the video streams in real time, and displays the screenshots in a carousel mode. The method has the advantages that bandwidth consumption in a multipath video monitoring scene can be reduced, and real-time performance and effectiveness of monitoring are guaranteed.
Inventors
- JIA WEI
- ZHANG RUI
- WANG GUOFAN
- LIU YANG
- E Bo
- Gao Wuzhou
- PENG YU
- ZHANG TA
- Zhao ao
- HAO XIAOLIANG
Assignees
- 北京市西山试验林场管理处 (Beijing Xishan Experimental Forest Farm Management Office)
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-02-04
Claims (10)
- 1. A multipath intelligent mixed-flow video method based on an AI intelligent bayonet, characterized by comprising the following steps: Step 1, constructing a multipath intelligent mixed-flow video system based on an AI intelligent bayonet, wherein the system is provided with M AI intelligent bayonet cameras, all the cameras are connected with the same intelligent NVR module, and the intelligent NVR module is also connected with N video viewing terminals; Step 2, each AI intelligent bayonet camera collects the video stream of its corresponding monitoring area in real time and dynamically detects the video stream pictures with a built-in lightweight AI model; if the detection result is that no moving object exists, a static scene is judged; when the moving object is a valid target, the camera uploads the real-time video stream to the intelligent NVR module; when the moving object is an invalid target, or the scene is static, the camera generates a screenshot with a real-time timestamp at a set time interval and uploads it to the intelligent NVR module; Step 3, the intelligent NVR module stores the received real-time video streams or timestamped screenshots in a classified manner; Step 4, the video viewing terminal acquires a viewing instruction, generates a viewing request for a target camera according to the instruction, and sends the request to the intelligent NVR module; Step 5, the intelligent NVR module queries the state of the target camera according to the viewing request; if the state is a valid dynamic scene, the intelligent NVR module forwards the video stream to the video viewing terminal in real time; if the state is an invalid dynamic or static scene, it pushes a timestamped screenshot to the video viewing terminal at the set time interval; Step 6, the video viewing terminal receives the real-time video stream or the timestamped screenshot pushed by the intelligent NVR module; a video stream is played in real time, and screenshots are displayed in carousel mode.
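The per-cycle decision in step 2 of claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation; all function and variable names (`dispatch`, `interval`, the action strings) are assumptions.

```python
STREAM, SCREENSHOT = "stream", "screenshot"

def dispatch(motion_detected, is_valid_target, last_shot_ts, now, interval=30.0):
    """Decide, per detection cycle, whether the camera uploads the live
    stream or a timestamped screenshot (claim 1, step 2).
    Returns (action, new_last_shot_ts); action is None when nothing is due."""
    if motion_detected and is_valid_target:   # valid target: person / car / animal
        return STREAM, last_shot_ts
    # invalid target (e.g. swaying vegetation) or static scene:
    if now - last_shot_ts >= interval:        # set time interval has elapsed
        return SCREENSHOT, now                # screenshot carries timestamp `now`
    return None, last_shot_ts
```

A valid moving target always yields the stream; everything else degrades to interval-gated screenshots, which is the bandwidth-saving core of the claim.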
- 2. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet of claim 1, wherein in step 2, the lightweight AI model performs dynamic detection with a fusion algorithm of a frame difference method and background modeling to obtain a final motion region mask value; only when the frame difference method and the background modeling method both judge a pixel to be in a motion region is the motion judged real, in which case the final motion region mask value is 1; otherwise the pixel belongs to a static region and the mask value is 0. The expression is: $Mask_{final}(x,y)=\begin{cases}1,& Mask_{fd}(x,y)=1 \text{ and } |I_t(x,y)-B_t(x,y)|>T_b\\0,&\text{otherwise}\end{cases}$; wherein $Mask_{final}(x,y)$ is the final motion region mask value; $Mask_{fd}(x,y)$ is the motion region mask of the frame difference method, $Mask_{fd}(x,y)=1$ meaning the frame difference method judges a motion region; $|I_t(x,y)-B_t(x,y)|$ is the difference between the current frame $I_t$ and the background model $B_t$, $T_b$ is the background difference threshold, and $|I_t(x,y)-B_t(x,y)|>T_b$ means the background modeling method judges a motion region.
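Claim 2's AND-fusion of the two detectors can be sketched in NumPy. This is an illustrative sketch under the claim's definitions; the function name and array layout are assumptions.

```python
import numpy as np

def fused_motion_mask(mask_fd, frame, background, tb):
    """AND-fusion per claim 2: Mask_final = 1 only where the frame-difference
    mask is 1 AND |I_t - B_t| exceeds the background difference threshold T_b."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask_bg = diff > tb                         # background-modeling verdict
    return (mask_fd.astype(bool) & mask_bg).astype(np.uint8)
```

Requiring agreement between both methods suppresses single-method false positives such as sensor noise (frame difference) or stale background pixels (background model).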
- 3. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet as set forth in claim 2, wherein the specific determination process of the frame difference method is as follows: the lightweight AI model calculates pixel differences of consecutive video frames by the frame difference method to locate existing motion regions. First, the adjacent frame differences are calculated, i.e. the difference $D_1$ between the current frame $I_t$ and the previous frame $I_{t-1}$, and the difference $D_2$ between the current frame $I_t$ and the next frame $I_{t+1}$; the expressions are: $D_1(x,y)=|I_t(x,y)-I_{t-1}(x,y)|$; $D_2(x,y)=|I_{t+1}(x,y)-I_t(x,y)|$; wherein $I(x,y)$ represents the gray value of image $I$ at coordinates $(x,y)$ and $t$ represents the current time. Then the motion region is screened by taking the intersection of the two frame differences to obtain the motion region mask $Mask_{fd}$; the formula is: $Mask_{fd}(x,y)=\begin{cases}1,& D_1(x,y)>T \text{ and } D_2(x,y)>T\\0,&\text{otherwise}\end{cases}$; wherein $T$ represents the pixel threshold; when both frame differences exceed the pixel threshold $T$, the pixel is judged to be in a motion region and the mask value is 1; otherwise it is a static region and the mask value is 0.
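The three-frame intersection of claim 3 maps directly onto array operations. A minimal sketch, assuming grayscale frames as integer arrays; the function name is an assumption.

```python
import numpy as np

def frame_diff_mask(prev_frame, cur_frame, next_frame, t):
    """Three-frame difference per claim 3: a pixel is motion only if both
    D1 = |I_t - I_{t-1}| and D2 = |I_{t+1} - I_t| exceed the pixel threshold T."""
    d1 = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    d2 = np.abs(next_frame.astype(np.int32) - cur_frame.astype(np.int32))
    return ((d1 > t) & (d2 > t)).astype(np.uint8)  # intersection of both diffs
```

Taking the intersection rejects pixels that changed in only one of the two adjacent differences, e.g. one-frame flicker.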
- 4. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet as claimed in claim 3, wherein the AI intelligent bayonet camera is further internally provided with an illumination sensor for synchronously collecting the ambient illumination intensity data L of the monitoring area in real time while collecting the video stream. The AI intelligent bayonet camera divides illumination intensity into four types of scenes and matches an exclusive adaptation strategy to each type, providing an environment adaptation basis for subsequent dynamic detection: (1) strong light scene, L ≥ 10000 lux: the strong light suppression mode is started, the pixel threshold T of the frame difference method is set in the range [35,45], and the background difference threshold $T_b$ in [30,35]; (2) normal light scene, 1000 lux < L < 10000 lux: the standard adaptation mode is adopted, T is set in the range [25,35], and $T_b$ in [20,25]; (3) weak light scene, 100 lux ≤ L ≤ 1000 lux: the weak light enhancement mode is triggered, T is set in the range [15,25], and $T_b$ in [10,15]; (4) dim light scene, L < 100 lux: the dim light noise reduction mode is started, T is set in the range [10,20], and $T_b$ in [5,10].
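The lux-to-threshold mapping of claim 4 can be sketched as a simple lookup. The function name is an assumption; the boundary comparisons follow the claim text (the weak-light band is closed at both ends, the normal band open).

```python
def adapt_thresholds(lux):
    """Map ambient illumination L (lux) to the (T range, T_b range) pair
    defined in claim 4, one branch per illumination scene."""
    if lux >= 10000:                 # strong light suppression mode
        return (35, 45), (30, 35)
    if lux > 1000:                   # normal light, standard adaptation mode
        return (25, 35), (20, 25)
    if lux >= 100:                   # weak light enhancement mode
        return (15, 25), (10, 15)
    return (10, 20), (5, 10)         # dim light noise reduction mode
```

Lower thresholds in darker scenes keep faint real motion detectable, at the cost of more sensitivity to noise, which the claim's dedicated noise-reduction mode compensates for.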
- 5. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet as set forth in claim 2, wherein the background model construction and judgment process of the background modeling method is as follows: (1) background model initialization: when the system is started, the average gray value of the first 100 motion-free frames is taken as the initial background model $B_0$; the expression is: $B_0(x,y)=\frac{1}{100}\sum_{k=1}^{100} I_k(x,y)$; wherein k represents the k-th frame, k ∈ [1,100]; (2) dynamic background update: after each frame is processed, the background model is updated by a weighted average of the current frame $I_t$ and the previous background model $B_{t-1}$; the expression is: $B_t(x,y)=\alpha\,I_t(x,y)+(1-\alpha)\,B_{t-1}(x,y)$; wherein $\alpha$ is the update weight, and the background is updated only in static regions; (3) motion region verification: the difference $D_b(x,y)=|I_t(x,y)-B_t(x,y)|$ between the current frame $I_t$ and the background model $B_t$ is calculated and compared with the background difference threshold $T_b$; when $D_b(x,y)>T_b$, a motion region is determined; otherwise, a static region.
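The initialize/update/verify cycle of claim 5 can be sketched as a small class. A sketch under the claim's formulas; the class name and the value of `alpha` are assumptions.

```python
import numpy as np

class BackgroundModel:
    """Running-average background model per claim 5: B_0 is the mean of the
    first 100 motion-free frames; thereafter static pixels are updated with
    B_t = alpha * I_t + (1 - alpha) * B_{t-1}."""

    def __init__(self, init_frames, alpha=0.05, tb=20):
        self.b = np.mean(np.stack(init_frames), axis=0)  # B_0
        self.alpha, self.tb = alpha, tb

    def step(self, frame):
        """Verify motion against the model, then update static regions only."""
        diff = np.abs(frame - self.b)
        motion = diff > self.tb                          # D_b > T_b => motion
        static = ~motion
        self.b[static] = (self.alpha * frame[static]
                          + (1 - self.alpha) * self.b[static])
        return motion.astype(np.uint8)
```

Updating only static pixels prevents a slow-moving object from being absorbed into the background, which is the usual failure mode of unconditional running averages.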
- 6. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet of claim 1, wherein the intelligent NVR module, while receiving a real-time video stream or timestamped screenshot, simultaneously receives the corresponding meta-information, said meta-information comprising the illumination intensity, the camera ID and its state label; the request content of the viewing request comprises a target camera ID, a viewing type and an illumination scene, wherein the viewing type comprises two types, real-time viewing and historical viewing, and the illumination scene comprises a strong light scene, a normal light scene, a weak light scene and a dim light scene.
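The two message shapes in claim 6 can be sketched as dataclasses. The class and field names are illustrative assumptions, not the patent's wire format.

```python
from dataclasses import dataclass

@dataclass
class FrameMeta:
    """Meta-information uploaded with each stream/screenshot (claim 6)."""
    camera_id: str
    illumination_lux: float
    state_label: str          # "valid_dynamic" | "invalid_dynamic" | "static"

@dataclass
class ViewRequest:
    """Request body sent by a video viewing terminal (claim 6)."""
    target_camera_id: str
    view_type: str            # "realtime" | "history"
    illumination_scene: str   # "strong" | "normal" | "weak" | "dim"
```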
- 7. The method of claim 6, wherein the intelligent NVR module classifies and stores the data according to the state label of the data collected by the AI intelligent bayonet camera together with its associated illumination level label: for video stream data whose state label is a valid dynamic scene, the intelligent NVR module adopts a layered storage strategy, i.e. the data is stored on a solid state disk (SSD) under a three-level directory structure of "camera ID / valid dynamic / illumination level"; for screenshot data whose state label is an invalid dynamic or static scene, the intelligent NVR module stores the data directly on a mechanical hard disk under a three-level directory structure of "camera ID / static / illumination level", and automatically generates a corresponding index file every hour to record the timestamps and storage paths of the screenshots in that period.
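Claim 7's three-level directory layout can be sketched as a path builder. The mount points `/mnt/ssd` and `/mnt/hdd` and the label strings are assumptions for illustration.

```python
from pathlib import PurePosixPath

def storage_path(camera_id, state_label, light_level):
    """Three-level directory per claim 7: valid dynamic video goes to the
    SSD tier, invalid-dynamic/static screenshots to the mechanical-disk tier."""
    if state_label == "valid_dynamic":
        root, bucket = "/mnt/ssd", "valid_dynamic"   # hot tier: video streams
    else:
        root, bucket = "/mnt/hdd", "static"          # cold tier: screenshots
    return str(PurePosixPath(root, camera_id, bucket, light_level))
```

Placing the illumination level last keeps the per-camera subtree browsable and makes claim 8's illumination-filtered lookup a simple directory listing.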
- 8. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet of claim 6, wherein after receiving the viewing request, the intelligent NVR module queries a camera state table according to the camera ID in the request parameters to obtain the current scene state and the latest data timestamp of the corresponding monitoring area. When the query result shows that the current scene state is valid dynamic: if the viewing request specifies no illumination scene filter, the intelligent NVR module retrieves the real-time video stream data from the corresponding storage path, synchronously associates the illumination scene annotation, and pushes the video stream and its annotation to the video viewing terminal via the video forwarding module; when multiple video viewing terminals simultaneously request the same camera, the intelligent NVR module adopts a single-stream multi-push multiplexing mechanism, i.e. only one video source input is maintained and concurrent multi-terminal viewing is realized through multiplexing. When the query result shows that the current scene state is static or invalid dynamic: if the viewing request specifies no illumination scene filter, the intelligent NVR module reads the timestamped screenshot file and its associated illumination information from the corresponding storage path and checks the difference between the screenshot timestamp and the current system time of the video viewing terminal; if the time difference is less than or equal to the set time interval, the screenshot and its illumination label are judged to be valid data and pushed directly to the video viewing terminal; if the time difference exceeds the set time interval, the front-end camera is triggered to update the screenshot and the real-time illumination data before pushing, so as to ensure that the static picture received at the rear end is time-effective; and if the request contains an illumination scene filter, the intelligent NVR module selects, based on the illumination intensity information in the index file, the latest valid screenshot under the corresponding illumination level and pushes it with its associated label to the video viewing terminal.
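The freshness decision in claim 8 for static/invalid-dynamic scenes reduces to a timestamp comparison. A minimal sketch; the function names and return strings are assumptions.

```python
def screenshot_is_fresh(shot_ts, now, interval=30.0):
    """Claim 8 freshness check: a stored screenshot is valid data only if its
    timestamp is within the set time interval of the current system time."""
    return (now - shot_ts) <= interval

def serve_static_view(shot_ts, now, interval=30.0):
    """Push the cached screenshot if fresh; otherwise trigger the front-end
    camera to refresh the screenshot and illumination data before pushing."""
    if screenshot_is_fresh(shot_ts, now, interval):
        return "push_cached_screenshot"
    return "trigger_camera_refresh"
```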
- 9. The multi-channel intelligent mixed-flow video method based on the AI intelligent bayonet as set forth in claim 1, wherein the intelligent NVR module is internally provided with a real-time synchronization module, and the real-time synchronization module is used for performing time synchronization on video streams or screenshots acquired by a front-end AI intelligent bayonet camera and output pictures of a rear-end video viewing terminal, so as to ensure that the rear-end video viewing terminal accurately displays the current state of a monitoring area.
- 10. The multipath intelligent mixed-flow video method based on the AI intelligent bayonet according to claim 1, wherein the video viewing terminal obtains the viewing type, the target camera ID, the illumination scene and the time parameter through an interactive interface, generates a viewing request from these parameters, and sends the viewing request to the intelligent NVR module; after receiving the data returned by the intelligent NVR module, the video viewing terminal parses the data and maps the data of different cameras to the corresponding display windows, realizing parallel loading of multiple picture paths.
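The terminal-side request assembly and window mapping of claim 10 can be sketched as follows. The dictionary keys and window-id scheme are assumptions for illustration.

```python
def build_view_request(view_type, camera_id, scene, time_range=None):
    """Assemble a viewing request from the interactive-interface inputs
    (claim 10); only history views carry a time range."""
    req = {"view_type": view_type, "target_camera_id": camera_id,
           "illumination_scene": scene}
    if time_range is not None:
        req["time_range"] = time_range
    return req

def map_to_windows(payloads):
    """Map returned data of different cameras onto display windows so that
    multiple picture paths load in parallel."""
    return {f"window-{i}": p["camera_id"] for i, p in enumerate(payloads)}
```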
Description
Multipath intelligent mixed-flow video method based on AI intelligent bayonet
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a multipath intelligent mixed-flow video method based on an AI intelligent bayonet.
Background
The existing multipath video monitoring methods are mainly based on full-volume transmission from traditional cameras, storage and forwarding by an ordinary network video recorder (NVR), or simple dynamic-detection transmission. In the traditional full-volume transmission scheme, a front-end camera continuously uploads the complete video stream to the rear end or NVR regardless of whether effective dynamics, such as a person, a car or an animal, exist in the monitored scene. When multiple cameras run and multiple rear-end terminals view simultaneously, this mode consumes a great deal of bandwidth: a single 1080P real-time video needs 2-4 Mbps, so 10 concurrent cameras need 20-40 Mbps, and with 3 rear-end terminals viewing simultaneously the requirement multiplies to 60-120 Mbps, easily causing network congestion and transmission delay. Moreover, streaming a large amount of meaningless dynamics, such as swaying vegetation and changing light and shadow, wastes resources and increases storage and network costs.
Some methods adopt fixed-interval screenshots to replace part of the video transmission so as to reduce bandwidth consumption, but such schemes have no front-end dynamic-identification linkage: the screenshot interval is fixed and no real-time timestamp is embedded, so in static scenes a too-short interval still wastes bandwidth while key information is easily missed; the timeliness of a screenshot cannot be judged intuitively at the rear-end viewing side and must be checked manually, reducing monitoring efficiency; and in addition, such schemes suffer from the drawback that the front end and the NVR lack a time synchronization mechanism, so the screenshot timestamp easily drifts, further reducing the reliability of the monitoring data. In summary, the traditional methods have the following limitations: large total transmission bandwidth, aggravated when multiple terminals are concurrent, with incomplete bandwidth saving; fixed screenshot schemes lacking flexibility and time relevance, giving insufficient monitoring effectiveness; ordinary NVRs lacking differentiated scheduling and single-stream multi-push capability, giving low resource utilization; static-scene data that is hard to retain with timeliness, affecting post-event tracing and real-time monitoring; and background modeling affected by ambient illumination changes, prone to false detection in strong light, dim light and other scenes, i.e. illumination interference judged as a moving target, or a real target missed due to insufficient illumination, yielding poor all-weather monitoring robustness.
Disclosure of Invention
The multipath intelligent mixed-flow video method based on the AI intelligent bayonet provided by the invention can reduce bandwidth consumption in a multipath video monitoring scene while ensuring the real-time performance and effectiveness of monitoring.
In order to achieve the above purpose, the multipath intelligent mixed-flow video method based on the AI intelligent bayonet provided by the invention is characterized by comprising the following steps: Step 1, constructing a multipath intelligent mixed-flow video system based on an AI intelligent bayonet, wherein the system is provided with M AI intelligent bayonet cameras, all the cameras are connected with the same intelligent NVR module, and the intelligent NVR module is also connected with N video viewing terminals; Step 2, each AI intelligent bayonet camera collects the video stream of its corresponding monitoring area in real time and dynamically detects the video stream pictures with a built-in lightweight AI model; if the detection result is that no moving object exists, a static scene is judged; when the moving object is a valid target such as a person, a car or an animal, the camera uploads the real-time video stream to the intelligent NVR module; when the moving object is an invalid target such as a tree or grass, or the scene is static, the camera generates a screenshot with a real-time timestamp at a set time interval, such as 30 seconds, and uploads it to the intelligent NVR module; Step 3, the intelligent NVR module stores the received real-time video streams or timestamped screenshots in a classified manner; Step 4, the video viewing terminal acquires a viewing instruction, generates a viewing request for a target camera according to the viewing instruction, and then se