US-20260127883-A1 - CONTROL METHOD AND APPARATUS FOR BROADCAST MONITORING SYSTEM AND COMPUTING AND MEMORY SYSTEM
Abstract
The present disclosure provides a control method and apparatus for a broadcast monitoring system, a computer device and a storage medium, and belongs to the field of image recognition and terminal broadcast monitoring technology. The control method includes: acquiring an image to be detected; dividing the image to be detected into a plurality of target sub-images according to a shooting visual angle of a shooting device for shooting the image to be detected; processing each of the target sub-images by using a pre-trained target neural network model to obtain a recognition result for the target sub-image; and obtaining a detection result of whether a target object exists in the image to be detected or not based on the recognition result corresponding to each target sub-image, and sending the detection result to a terminal so that the terminal determines a display state at least based on the detection result.
Inventors
- Tingting Wang
- Guangwei HUANG
Assignees
- Beijing Boe Technology Development Co., Ltd.
- BOE TECHNOLOGY GROUP CO., LTD.
Dates
- Publication Date
- 2026-05-07
- Application Date
- 2023-05-31
Claims (18)
- 1 . A control method for a broadcast monitoring system, comprising: acquiring an image to be detected; dividing the image to be detected into a plurality of target sub-images according to a shooting visual angle of a shooting device for shooting the image to be detected; processing each of the plurality of target sub-images by using a pre-trained target neural network model to obtain a recognition result for the target sub-image; and obtaining a detection result of whether a target object exists in the image to be detected or not based on the recognition result corresponding to each target sub-image, and sending the detection result to a terminal, so that the terminal determines a display state at least based on the detection result.
- 2 . The control method according to claim 1 , wherein dividing the image to be detected into the plurality of target sub-images according to the shooting visual angle of the shooting device, comprises: dividing the image to be detected into a first sub-region, a second sub-region and a third sub-region arranged sequentially along a first direction, in response to the shooting visual angle being within a preset visual angle range, wherein the first sub-region and the third sub-region partially overlap with the second sub-region, a width of the first sub-region in the first direction is equal to a width of the third sub-region in the first direction, and a width of the second sub-region in the first direction is greater than the width of the first sub-region in the first direction, and the first direction is a height direction of the image to be detected; dividing a part of the image to be detected in the first sub-region into a plurality of first sub-images arranged side by side along a second direction, wherein widths of at least part of the first sub-images in the second direction are the same, and the second direction is a width direction of the image to be detected; dividing a part of the image to be detected in the second sub-region into a plurality of second sub-images arranged side by side along the second direction, wherein widths of at least part of the second sub-images in the second direction are the same; and dividing a part of the image to be detected in the third sub-region into a plurality of third sub-images arranged side by side along the second direction, wherein widths of at least part of the third sub-images in the second direction are the same, and each of the plurality of target sub-images comprises a first sub-image, a second sub-image, and a third sub-image; wherein a width of the third sub-image in the second direction is larger than a width of the second sub-image in the second direction, which is larger than the width of the first 
sub-image in the second direction.
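The region-and-tile scheme of claim 2 can be sketched as follows. All concrete numbers here (band heights, tile widths, a 10% overlap) are illustrative assumptions; the claim fixes only the relative ordering of the widths and, per claim 3, which bands use overlapping tiles. One plausible reading is that the smaller, overlapping tiles near the top of the frame suit more distant (hence smaller-looking) targets under the given shooting angle.

```python
import numpy as np

def tile_image(img, overlap_ratio=0.1):
    """Split an image into three overlapping horizontal bands and slice
    each band into side-by-side tiles. Band heights and tile widths are
    illustrative; the claim only fixes their relative ordering."""
    h, w = img.shape[:2]
    # Band heights: middle band twice the height of top/bottom (claim 4's
    # 2:1 ratio), with each outer band overlapping the middle band slightly.
    band_h = h // 4              # top and bottom band height
    mid_h = 2 * band_h           # middle band height
    ov = int(overlap_ratio * mid_h)
    bands = [
        img[0:band_h + ov],                    # first sub-region (top)
        img[band_h - ov:band_h + mid_h + ov],  # second sub-region (middle)
        img[h - band_h - ov:h],                # third sub-region (bottom)
    ]
    # Tile widths grow from the top band to the bottom band; only the top
    # and middle bands use horizontally overlapping tiles (claim 3).
    tile_ws = [w // 8, w // 6, w // 4]
    overlapping_flags = [True, True, False]
    tiles = []
    for band, tw, overlapping in zip(bands, tile_ws, overlapping_flags):
        step = tw - int(overlap_ratio * tw) if overlapping else tw
        for x in range(0, w - tw + 1, step):
            tiles.append(band[:, x:x + tw])
    return tiles

img = np.zeros((400, 480, 3), dtype=np.uint8)
tiles = tile_image(img)
print(len(tiles))  # 18 tiles with these illustrative sizes
```

Each tile can then be fed to the recognition model independently, so a small target near a tile boundary is still fully contained in at least one overlapping neighbor.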
- 3 . The control method according to claim 2 , wherein the adjacent first sub-images at least partially overlap with each other in the second direction; the adjacent second sub-images at least partially overlap with each other in the second direction; and the third sub-images do not overlap with each other in the second direction.
- 4 . The control method according to claim 3 , wherein a ratio of the width of the second sub-region in the first direction to the width of the first sub-region in the first direction is 2:1; and a ratio of the widths of the third sub-image and the second sub-image in the second direction is 3:2, and a ratio of the widths of the second sub-image and the first sub-image in the second direction is 4:3.
- 5 . The control method according to claim 3 , wherein in the plurality of first sub-images arranged side by side in the second direction, the remaining first sub-images, except for a 1 st first sub-image and a last first sub-image arranged side by side in the second direction, have a same width in the second direction; the 1 st first sub-image and the last first sub-image have a same width in the second direction smaller than the width of the remaining first sub-image in the second direction; and a ratio of an overlapping width of two adjacent first sub-images in the second direction to the width of the remaining first sub-image in the second direction is 1:10; in the plurality of second sub-images arranged side by side in the second direction, the remaining second sub-images, except for the 1 st second sub-image and the last second sub-image arranged side by side in the second direction, have the same width in the second direction; the 1 st second sub-image and the last second sub-image have the same width in the second direction, which is smaller than the width of the remaining second sub-images in the second direction; and a ratio of an overlapping width of two adjacent second sub-images in the second direction to the width of the remaining second sub-images in the second direction is 1:10; in the plurality of third sub-images arranged side by side in the second direction, the remaining third sub-images, except for a 1 st third sub-image and a last third sub-image arranged side by side in the second direction, have a same width in the second direction; the 1 st third sub-image and the last third sub-image have a same width in the second direction smaller than the width of the remaining third sub-image in the second direction; a ratio of an overlapping width of two adjacent third sub-images in the second direction to the width of the remaining third sub-image in the second direction is 1:10; and a ratio of an overlapping width in the first direction of a first 
sub-image and a second sub-image adjacent to each other in the first direction to the width of the second sub-image in the first direction is 1:10, and a ratio of an overlapping width in the first direction of a third sub-image and a second sub-image adjacent to each other in the first direction to the width of the second sub-image in the first direction is 1:10.
- 6 . The control method according to claim 1 , wherein the target neural network model is trained by the following steps: acquiring a first training data set and a second training data set, wherein the second training data set is obtained by filtering the first training data set; training a teacher machine learning model to be trained according to the first training data set to obtain a preliminarily trained teacher machine learning model; training the preliminarily trained teacher machine learning model according to the second training data set to obtain the trained teacher machine learning model; and training a student machine learning model to be trained by adopting a knowledge distillation training method according to the second training data set and the trained teacher machine learning model, to obtain the trained student machine learning model as the target neural network model.
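The two-stage teacher training of claim 6 can be illustrated with a deliberately tiny stand-in model: train on the full (noisy) first set, filter it, then continue training on the cleaner second set. The perceptron, the synthetic data, and the agreement-based filter below are all assumptions for illustration; the disclosure does not specify how the first training data set is filtered.

```python
import numpy as np

rng = np.random.default_rng(0)

class Perceptron:
    """Minimal stand-in for the teacher model (illustrative only)."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def fit(self, X, y, epochs=3, lr=0.1):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                pred = 1 if xi @ self.w > 0 else -1
                if pred != yi:
                    self.w += lr * yi * xi
        return self
    def predict(self, X):
        return np.where(X @ self.w > 0, 1, -1)

# First training data set: well-separated samples plus ~10% flipped labels.
X = np.vstack([rng.normal(0, 1, (200, 2)) + 2,
               rng.normal(0, 1, (200, 2)) - 2])
y = np.array([1] * 200 + [-1] * 200)
y_noisy = np.where(rng.random(400) < 0.1, -y, y)

# Stage 1: preliminary teacher trained on the full, noisy first set.
teacher = Perceptron(2).fit(X, y_noisy)

# Filtering (assumed heuristic): keep only samples the preliminary teacher
# agrees with, yielding the smaller, cleaner second training data set.
keep = teacher.predict(X) == y_noisy
X2, y2 = X[keep], y_noisy[keep]

# Stage 2: continue training the same teacher on the filtered set.
teacher.fit(X2, y2)
acc = (teacher.predict(X) == y).mean()
print(f"accuracy on true labels: {acc:.2f}")
```

The trained teacher then serves as the soft-label source for the student model in the distillation step of the same claim.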
- 7 . The control method according to claim 6 , wherein the second training data set comprises a plurality of training images labeled with the sample labels; training the student machine learning model to be trained by adopting the knowledge distillation training method according to the second training data set and the trained teacher machine learning model, to obtain the trained student machine learning model as the target neural network model, comprises: inputting the training image into the trained teacher machine learning model to obtain a first output result for the trained teacher machine learning model; inputting the training image into the student machine learning model to be trained to obtain a second output result for the student machine learning model to be trained; determining a first loss function according to the first output result and the second output result; determining a second loss function according to the second output result and the sample label of the training image; obtaining a weighted loss function according to the first loss function and the second loss function; and adjusting parameters of the student machine learning model to be trained according to the weighted loss function until the weighted loss function is converged, to obtain the trained student machine learning model as the target neural network model.
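Claim 7's weighted loss matches the standard knowledge-distillation objective: a soft loss that makes the student track the teacher's (temperature-softened) outputs, plus a hard cross-entropy against the sample labels, combined with a weight. The temperature `T` and weight `alpha` below are conventional choices, not values from the disclosure.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      T=2.0, alpha=0.5):
    """Weighted sum of a soft (teacher-matching) loss and a hard
    (ground-truth) loss, as in standard knowledge distillation."""
    p_t = softmax(teacher_logits, T)               # teacher's softened output
    log_p_s = np.log(softmax(student_logits, T))
    soft = -(p_t * log_p_s).sum(axis=-1).mean() * T * T   # first loss function
    p_hard = softmax(student_logits)
    hard = -np.log(p_hard[np.arange(len(labels)), labels]).mean()  # second loss
    return alpha * soft + (1 - alpha) * hard       # weighted loss function

s = np.array([[2.0, 0.5], [0.1, 1.5]])   # student logits (toy values)
t = np.array([[2.5, 0.2], [0.0, 2.0]])   # teacher logits (toy values)
y = np.array([0, 1])                     # sample labels
loss = distillation_loss(s, t, y)
```

Training then minimizes this scalar with any gradient-based optimizer until it converges, as the claim describes.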
- 8 . The control method according to claim 7 , wherein acquiring the first training data set comprises: acquiring an original data set, wherein the original data set comprises a plurality of initial sample images; performing a target object recognition on the initial sample image; and determining a first reference frame containing the target object in response to the target object being present; updating a position of the first reference frame according to position information of the first reference frame to obtain a second reference frame; determining an overlapping degree of the second reference frame and the first reference frame according to position information of the second reference frame and the position information of the first reference frame; and labeling a sample label for a partial sample image of the initial sample images in the second reference frame according to a comparison result between the overlapping degree and a first preset threshold, and taking the partial sample image labeled with the sample label as the training image in the first training data set.
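The "overlapping degree" compared against the first preset threshold in claim 8 is most naturally read as intersection-over-union (IoU) between the original reference frame and the shifted one; that reading, and the 0.5 threshold below, are assumptions for illustration.

```python
def iou(a, b):
    """Overlapping degree of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def label_crop(first_frame, second_frame, threshold=0.5):
    """Positive sample if the shifted crop still covers the target well;
    claim 10 refines this with probability-weighted IoU buckets."""
    return 1 if iou(first_frame, second_frame) >= threshold else 0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # intersection 50, union 150 -> 1/3
```

The crop inside the second reference frame thus becomes a training image whose label depends on how much of the original target it still contains.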
- 9 . The control method according to claim 8 , wherein updating the position of the first reference frame according to the position information of the first reference frame to obtain the second reference frame, comprises: moving a specific coordinate point in the first reference frame according to the position information of the first reference frame to obtain a third reference frame; adjusting a width and a height of the third reference frame based on a preset scaling factor according to position information of the third reference frame by taking a central point of the third reference frame as a center, to obtain a fourth reference frame; determining a new central point according to position information of the fourth reference frame and a first preset central point range; and adjusting a width and a height of the fourth reference frame according to the new central point and a preset cropping range to obtain the second reference frame.
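Claim 9's frame update reads like a standard box-jitter augmentation: shift the box, rescale it about its center, then clamp the result to a valid cropping range. The shift and scale magnitudes below are illustrative, and the function and parameter names are hypothetical, not taken from the disclosure.

```python
import random

def jitter_box(box, img_w, img_h, max_shift=0.1, scale_range=(0.9, 1.1)):
    """Illustrative reading of claim 9: shift the box, rescale it about
    its center, then clamp it to the image (the 'cropping range')."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    # Step 1: move a specific coordinate point (here, the box center).
    dx = random.uniform(-max_shift, max_shift) * w
    dy = random.uniform(-max_shift, max_shift) * h
    cx, cy = (x1 + x2) / 2 + dx, (y1 + y2) / 2 + dy
    # Step 2: rescale width/height about the shifted center.
    s = random.uniform(*scale_range)
    nw, nh = w * s, h * s
    # Steps 3-4: clamp center and extent so the box stays inside the image.
    cx = min(max(cx, nw / 2), img_w - nw / 2)
    cy = min(max(cy, nh / 2), img_h - nh / 2)
    return (cx - nw / 2, cy - nh / 2, cx + nw / 2, cy + nh / 2)

random.seed(1)
print(jitter_box((100, 100, 200, 200), 640, 480))
```

Comparing the jittered box against the original with the overlapping degree of claim 8 then decides whether the resulting crop is labeled positive.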
- 10 . The control method according to claim 8 , wherein labeling the sample label for the partial sample image of the initial sample images in the second reference frame according to the comparison result between the overlapping degree and the first preset threshold, comprises: in response to the overlapping degree being greater than or equal to the first preset threshold, generating a first preset range according to a first preset probability, generating a second preset range according to a second preset probability, generating a third preset range according to a third preset probability, and generating a fourth preset range according to a fourth preset probability; wherein the first preset probability is greater than the second preset probability, the second preset probability is greater than the third preset probability, and the third preset probability is greater than or equal to the fourth preset probability; a sum of the first preset probability, the second preset probability, the third preset probability and the fourth preset probability is 1; in response to the overlapping degree being within the first preset range, determining and labeling a partial sample image of the initial sample image in the second reference frame as a positive sample label, so that a ratio of a number of training images with the overlapping degree within the first preset range to a total number of training images with the positive sample labels in the first training data set is the first preset probability; in response to the overlapping degree being within the second preset range, determining and labeling a partial sample image of the initial sample images in the second reference frame as a positive sample label, so that a ratio of a number of training images with the overlapping degree within the second preset range to a total number of training images with the positive sample labels in the first training data set is the second preset probability; in response to the overlapping 
degree being within the third preset range, determining and labeling a partial sample image of the initial sample image in the second reference frame as a positive sample label, so that a ratio of a number of training images with the overlapping degree within the third preset range to a total number of training images with the positive sample labels in the first training data set is the third preset probability; and in response to the overlapping degree being within the fourth preset range, determining and labeling a partial sample image of the initial sample image in the second reference frame as a positive sample label, so that a ratio of a number of training images with the overlapping degree within the fourth preset range to a total number of training images with the positive sample labels in the first training data set is the fourth preset probability; wherein the overlapping degree in the first preset range is smaller than the overlapping degree in the second preset range, the overlapping degree in the second preset range is smaller than the overlapping degree in the third preset range, and the overlapping degree in the third preset range is smaller than the overlapping degree in the fourth preset range.
- 11 . The control method according to claim 8 , wherein in the step of acquiring the first training data set, in response to no target object being present when performing the target object recognition on the initial sample image, the control method further comprises: labeling the initial sample image with a negative sample label, and taking the initial sample image labeled with the negative sample label as a training image in the first training data set.
- 12 . The control method according to claim 8 , wherein after performing the target object recognition on the initial sample image and determining a first reference frame containing the target object in response to the target object being present, the control method further comprises: determining a target central point in the initial sample image according to a size of the initial sample image and a second preset central point range, and determining a fifth reference frame by taking the target central point as a center; determining an overlapping degree of the fifth reference frame and any first reference frame in the initial sample image according to position information of the fifth reference frame and the position information of the first reference frame; and labeling a partial sample image of the initial sample image in the fifth reference frame as a negative sample label in response to the overlapping degree of the fifth reference frame and any first reference frame in the initial sample image being smaller than or equal to a second preset threshold, and using the partial sample image labeled with the negative sample label as the training image in the first training data set.
- 13 . The control method according to claim 1 , wherein acquiring the image to be detected, comprises: acquiring a plurality of continuous video frames shot by the shooting device; and determining whether a moving target exists in a shooting site of the shooting device by adopting a frame difference method according to the plurality of continuous video frames; and taking the video frame shot by the shooting device as the image to be detected in response to the moving target existing.
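The frame difference check of claim 13 can be as simple as thresholding the per-pixel change between consecutive grayscale frames and flagging motion when enough pixels change. The two thresholds below are illustrative, not values from the disclosure.

```python
import numpy as np

def moving_target_present(frames, diff_thresh=25, pixel_frac=0.01):
    """Two-frame difference: report motion when more than pixel_frac of
    the pixels change by more than diff_thresh between adjacent frames."""
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        if (diff > diff_thresh).mean() > pixel_frac:
            return True
    return False

# Simulated grayscale video: a static scene, then an object entering.
static = [np.full((120, 160), 80, dtype=np.uint8) for _ in range(3)]
moving = [f.copy() for f in static]
moving[2][40:80, 40:80] = 200
print(moving_target_present(static), moving_target_present(moving))  # False True
```

Only frames that trip this cheap check are passed on as images to be detected, which is what lets the heavier neural-network stage stay idle most of the time.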
- 14 . A control method for a broadcast monitoring system, wherein the broadcast monitoring system comprises a computing and memory system and a terminal; the computing and memory system comprises a processing module, a computing and memory module and an external storage module; and the control method comprises: acquiring, by the processing module, an image to be detected, dividing the image to be detected into a plurality of target sub-images according to a shooting visual angle of a shooting device for shooting the image to be detected, and storing the plurality of target sub-images in the external storage module; reading, by the computing and memory module, the plurality of target sub-images, processing each of the plurality of target sub-images by using a pre-trained target neural network model to obtain a recognition result for the target sub-image, and storing the recognition result in the external storage module; obtaining, by the processing module, a detection result of whether a target object exists in the image to be detected or not based on the recognition result corresponding to each target sub-image, and sending the detection result to the terminal; and determining, by the terminal, a display state based on the detection result.
- 15 . The control method according to claim 14 , wherein the terminal comprises a display module and a main control module; determining, by the terminal, the display state based on the detection result, comprises: sending, by the main control module, an awakening request to the display module and resetting a sleep timer in response to the detection result indicating that the target object exists in the image to be detected, to receive the detection result sent from the processing module in response to a terminal request; and sending, by the main control module, a sleep request to the display module to control the main control module to enter a sleep state in response to the detection result indicating that the target object does not exist in the image to be detected and a difference between a current system time and a latest awakening time of the display module is longer than a preset duration; and determining, by the display module, that the display state is normally displaying a picture shot by the shooting device in response to the awakening request; and determining, by the display module, that the display state is sleep in response to the sleep request.
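The wake/sleep control of claim 15 is essentially a timer-reset state machine: a positive detection wakes the display and resets the timer; a negative detection puts it to sleep only after the quiet period exceeds the preset duration. The sketch below models the display as a string state and injects a fake clock so the timeout is deterministic; the class and state names are hypothetical.

```python
import time

class TerminalController:
    """Illustrative main-control logic: wake on detection, sleep after
    a quiet period longer than sleep_after seconds."""
    def __init__(self, sleep_after=5.0, clock=time.monotonic):
        self.sleep_after = sleep_after
        self.clock = clock
        self.last_wake = clock()
        self.display = "sleep"

    def on_detection(self, target_present):
        now = self.clock()
        if target_present:
            self.last_wake = now          # reset the sleep timer
            self.display = "showing"      # awakening request
        elif now - self.last_wake > self.sleep_after:
            self.display = "sleep"        # sleep request after the timeout

# Deterministic fake clock for illustration.
t = [0.0]
ctrl = TerminalController(sleep_after=5.0, clock=lambda: t[0])
ctrl.on_detection(True)
state_after_wake = ctrl.display
t[0] = 6.0
ctrl.on_detection(False)
state_after_timeout = ctrl.display
print(state_after_wake, state_after_timeout)  # showing sleep
```

Keeping the timeout check on the main control module (rather than the display) matches the claim's division of labor between the two modules.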
- 16 . A computing and memory system, comprising a processing module, a computing and memory module and an external storage module; wherein the processing module is configured to acquire an image to be detected; divide the image to be detected into a plurality of target sub-images according to a shooting visual angle of a shooting device for shooting the image to be detected, and store the plurality of target sub-images in the external storage module; the computing and memory module is configured to read the plurality of target sub-images; process each of the plurality of target sub-images by using a pre-trained target neural network model to obtain a recognition result for the target sub-image, and store the recognition result in the external storage module; and the processing module is configured to obtain a detection result of whether a target object exists in the image to be detected or not based on the recognition result corresponding to each target sub-image, and send the detection result to a terminal so that the terminal determines a display state based on the detection result.
- 17 . (canceled)
- 18 . A non-transitory computer readable storage medium, wherein the non-transitory computer readable storage medium has a computer program stored thereon, and the computer program, when being executed by a processor, causes the processor to perform the control method according to claim 1 .
Description
FIELD OF THE DISCLOSURE

The present disclosure relates to the field of image recognition and terminal broadcast (playback) monitoring technology, in particular to a control method for a broadcast monitoring system, a control apparatus for a broadcast monitoring system and a computing and memory system.

BACKGROUND OF THE DISCLOSURE

With the rapid development of information technology, modern electronic devices have also been developing rapidly towards intelligence, light weight and portability. In an intelligent terminal, the display screen shows the monitoring picture for long periods, so the power consumption of the system is high and its service life is adversely affected. Therefore, an urgent technical problem to be solved in the broadcast monitoring field is how to realize operation with low power consumption.

SUMMARY OF THE DISCLOSURE

The present disclosure aims to solve at least one of the technical problems in the prior art, and provides a control method for a broadcast monitoring system, a control apparatus for a broadcast monitoring system and a computing and memory system.
In a first aspect, the technical solution adopted for solving the technical problems of the present disclosure is a control method for a broadcast monitoring system, which includes: acquiring an image to be detected; dividing the image to be detected into a plurality of target sub-images according to a shooting visual angle of a shooting device for shooting the image to be detected; processing each of the plurality of target sub-images by using a pre-trained target neural network model to obtain a recognition result for the target sub-image; and obtaining a detection result of whether a target object exists in the image to be detected or not based on the recognition result corresponding to each target sub-image, and sending the detection result to a terminal, so that the terminal determines a display state at least based on the detection result. In some embodiments, dividing the image to be detected into the plurality of target sub-images according to the shooting visual angle of the shooting device, includes: dividing the image to be detected into a first sub-region, a second sub-region and a third sub-region arranged sequentially along a first direction, in response to the shooting visual angle being within a preset visual angle range, wherein the first sub-region and the third sub-region partially overlap with the second sub-region, a width of the first sub-region in the first direction is equal to a width of the third sub-region in the first direction, and a width of the second sub-region in the first direction is greater than the width of the first sub-region in the first direction, and the first direction is a height direction of the image to be detected; dividing a part of the image to be detected in the first sub-region into a plurality of first sub-images arranged side by side along a second direction, wherein widths of at least part of the first sub-images in the second direction are the same, and the second direction is a width direction of the image to 
be detected; dividing a part of the image to be detected in the second sub-region into a plurality of second sub-images arranged side by side along the second direction, wherein widths of at least part of the second sub-images in the second direction are the same; and dividing a part of the image to be detected in the third sub-region into a plurality of third sub-images arranged side by side along the second direction, wherein widths of at least part of the third sub-images in the second direction are the same, and each of the plurality of target sub-images includes a first sub-image, a second sub-image, and a third sub-image; wherein a width of the third sub-image in the second direction is larger than a width of the second sub-image in the second direction, which is larger than the width of the first sub-image in the second direction. In some embodiments, the adjacent first sub-images at least partially overlap with each other in the second direction; the adjacent second sub-images at least partially overlap with each other in the second direction; and the third sub-images do not overlap with each other in the second direction. In some embodiments, a ratio of the width of the second sub-region in the first direction to the width of the first sub-region in the first direction is 2:1; and a ratio of the widths of the third sub-image and the second sub-image in the second direction is 3:2, and a ratio of the widths of the second sub-image and the first sub-image in the second direction is 4:3. In some embodiments, in the plurality of first sub-images arranged side by side in the second direction, the remaining first sub-images, except for a 1st first sub-image and a last first sub-image arranged side by side in the second direction, have a same width in the second direction; the 1st first sub-image and the last first sub-image hav