EP-4468719-B1 - VIDEO GENERATION METHOD AND APPARATUS BASED ON MULTIPLE VEHICLE-MOUNTED CAMERAS, AND VEHICLE-MOUNTED DEVICE
Inventors
- LUO, Yuanqing
- CHEN, Xianling
- ZHAO, Long
- TU, Huixun
- XIE, Yi
- WANG, Guangfu
- YE, Nianjin
Dates
- Publication Date
- 20260513
- Application Date
- 20230117
Claims (15)
- A method for video generation based on on-board multi-camera, the method comprising:
  (S201) obtaining video data collected separately by multiple on-board cameras installed in different orientations on a vehicle, wherein each of the on-board cameras is configured to collect video data from a corresponding orientation;
  (S202) extracting one or more video sequences to be processed from the video data of each orientation by:
  - (S2021) determining a main camera among the multiple on-board cameras in different orientations by:
    - (S211) obtaining map navigation information of the vehicle during a driving process of the vehicle;
    - (S212) identifying a key location encountered during the driving process based on the map navigation information; and
    - (S213) determining the main camera from the multiple on-board cameras based on a driving direction of the vehicle and the key location, wherein the video data collected by the main camera is the video data captured by the main camera for the key location;
  - (S2022) extracting one or more main video sequences from the video data collected by the main camera, wherein each of the main video sequences has corresponding time information; and
  - (S2023) extracting respectively one or more auxiliary video sequences having the time information from the video data collected by other on-board cameras except the main camera, wherein the video sequences comprise the main video sequences and the auxiliary video sequences;
  (S203) determining display areas of the video data from each orientation in a target video to be generated; and
  (S204) combining multiple video sequences from multiple orientations based on the display areas to generate the target video.
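By way of illustration only (not part of the claims), the combination of steps S202 to S204 can be sketched as follows. All data shapes and names here are assumptions chosen for the sketch; the claim does not prescribe any particular representation of frames, timestamps, or display areas.

```python
def generate_target_video(video_data, main_spans, display_areas):
    """Illustrative sketch of steps S202-S204.

    video_data:    {camera: [(timestamp, frame), ...]} per orientation (S201)
    main_spans:    [(start_ts, end_ts), ...] time spans of the main video
                   sequences extracted from the main camera (S2022)
    display_areas: {camera: area_id} placement in the target video (S203)
    """
    target_video = []
    for start_ts, end_ts in main_spans:
        # S2023: for every camera, extract the frames that fall inside the
        # main sequence's time span (auxiliary sequences share the time info).
        composed = {}
        for camera, frames in video_data.items():
            clip = [frame for ts, frame in frames if start_ts <= ts <= end_ts]
            composed[display_areas[camera]] = clip
        # S204: one combined multi-area segment of the target video.
        target_video.append(composed)
    return target_video
```

The main camera contributes the time spans; every other camera's data is cut to those same spans so the per-area clips stay synchronized.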
- The method according to claim 1, wherein said (S2023) extracting respectively the one or more auxiliary video sequences having the time information from the video data collected by the other on-board cameras except the main camera comprises: (S231) determining respectively timestamps of a start video frame and an end video frame for each main video sequence based on the time information to obtain a timestamp sequence; (S232) marking the timestamp sequence in the video data collected by the other on-board cameras except the main camera; and (S233) sequentially determining video sequences to be extracted from the video data marked with the timestamp sequence and extracting the video sequences to be extracted to obtain the one or more auxiliary video sequences having the time information.
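The extraction in steps S231 to S233 can be illustrated with a minimal sketch; the list-of-tuples representation of video data is an assumption, not something the claim fixes.

```python
def extract_auxiliary_sequences(main_sequences, aux_video):
    """Illustrative sketch of S231-S233: derive the timestamp sequence from the
    main video sequences and cut matching clips from an auxiliary camera.

    main_sequences: [[(ts, frame), ...], ...]  sequences from the main camera
    aux_video:      [(ts, frame), ...]         video data from one other camera
    """
    # S231: timestamps of the start and end frame of each main sequence.
    timestamp_seq = [(seq[0][0], seq[-1][0]) for seq in main_sequences]
    # S232/S233: sequentially extract the auxiliary frames inside each span.
    aux_sequences = []
    for start_ts, end_ts in timestamp_seq:
        aux_sequences.append([f for ts, f in aux_video if start_ts <= ts <= end_ts])
    return aux_sequences
```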
- The method according to claim 1 or 2, wherein said (S203) determining the display areas of the video data from each orientation in the target video to be generated comprises: receiving a video template selected for the target video to be generated, wherein the video template comprises multiple template areas, and the template areas respectively have binding relations to the on-board cameras in different orientations on the vehicle; and determining the display areas of the video data from each orientation in the target video to be generated based on the binding relations between the on-board cameras in different orientations and the template areas.
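The binding relation between template areas and cameras amounts to a simple lookup; the following sketch (dictionaries and geometry tuples are assumptions for illustration) resolves each camera's display area from a selected template.

```python
def assign_display_areas(template, camera_bindings):
    """Illustrative sketch of claim 3: resolve display areas from a template.

    template:        {area_id: geometry}, e.g. {"main": (0, 0, 1280, 720)}
    camera_bindings: {area_id: camera} binding relation stored with the template
    """
    # Each camera inherits the geometry of the template area it is bound to.
    return {camera: template[area_id]
            for area_id, camera in camera_bindings.items()}
```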
- The method according to claim 3, wherein said (S204) combining the multiple video sequences from the multiple orientations based on the display areas to generate the target video comprises: playing, in each of the display areas, one or more video sequences extracted from the video data collected by the on-board camera in a corresponding orientation to generate the target video.
- The method according to any one of claims 1, 2 or 4, wherein after (S204) combining the multiple video sequences from the multiple orientations based on the display areas to generate the target video, the method further comprises: adding video effects to the target video; wherein the video effects comprise at least one of the following processing: adding background music, changing the video style, adding sticker materials, applying filters to the video image, and replacing weather.
- The method according to claim 1, wherein, before (S204) combining the multiple video sequences from the multiple orientations based on the display areas to generate the target video, the method further comprises: identifying and comparing a video clarity of the multiple video sequences; and performing deblurring processing on the video sequences based on an identification and comparison result and in combination with particular vehicle information, to ensure that the video clarity of the multiple video sequences meets preset conditions.
- The method according to claim 6, wherein said identifying and comparing the video clarity of the multiple video sequences comprises: performing clarity comparison on a target video segment and a video of a preset clarity to obtain an identification and comparison result, wherein the target video segment is any video segment of the multiple video sequences, and the identification and comparison result comprises a relationship between the clarity of the target video segment and the preset clarity; determining, if the clarity of the target video segment is less than the preset clarity, a factor affecting the clarity of the target video segment; and correspondingly performing, if the factor affecting the clarity of the target video segment is the particular vehicle information, deblurring processing on the video sequences based on the identification and comparison result and combined with the particular vehicle information to ensure that the video clarity of the multiple video sequences meets the preset conditions.
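The claims leave the clarity metric open. As an illustration only, the sketch below uses the variance of pixel intensities as a crude sharpness proxy (in practice gradient-based measures such as the variance of the Laplacian are common) and compares a segment against a preset clarity threshold.

```python
def clarity_score(frame):
    """Crude sharpness proxy: variance of pixel intensities (assumption;
    the patent does not fix a clarity metric)."""
    mean = sum(frame) / len(frame)
    return sum((p - mean) ** 2 for p in frame) / len(frame)

def compare_clarity(target_segment, preset_clarity):
    """Illustrative comparison of a segment's clarity against a preset value.
    A segment is a list of frames; each frame a flat list of intensities."""
    # Use the least clear frame as the segment's clarity.
    score = min(clarity_score(frame) for frame in target_segment)
    return {"clarity": score, "below_preset": score < preset_clarity}
```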
- The method according to claim 6, wherein said identifying and comparing the video clarity of the multiple video sequences comprises: performing clarity comparison on a target video segment and a video of a preset clarity to obtain a first comparison result, wherein the target video segment is any video segment of the multiple video sequences, and the first comparison result comprises a relationship between the clarity of the target video segment and the preset clarity; selecting multiple target video segments having a same timestamp sequence but in different display areas; and performing comparison on video clarity of any two target video segments of the selected multiple target video segments to obtain a second comparison result, wherein the second comparison result comprises a relationship of the video clarity between the two target video segments, and the identification and comparison result comprises the first comparison result and the second comparison result.
- The method according to claim 7, wherein the particular vehicle information comprises a vehicle speed, vehicle location information, and an interface display size; said performing deblurring processing on the video sequences based on the identification and comparison result and combined with the particular vehicle information to ensure that the video clarity of the multiple video sequences meets the preset conditions comprises: obtaining a video deblurring model based on the vehicle speed of the vehicle when collecting the target video segment, the vehicle location information and the interface display size of a display area corresponding to the target video segment, wherein corresponding relations among different vehicle speeds, different vehicle location information, different interface display sizes, and different video deblurring models are pre-stored; and inputting the identification and comparison result and the particular vehicle information into a pre-trained video deblurring model to perform deblurring processing on the target video segment in the video sequences, to ensure that the video clarity of the multiple video sequences meets the preset conditions.
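The pre-stored correspondence between vehicle information and deblurring models can be pictured as a table lookup. The bucketing of vehicle speed and the key layout below are assumptions for illustration; the claim only requires that the correspondence be pre-stored.

```python
def select_deblur_model(model_table, speed, location, display_size):
    """Illustrative sketch of claim 9's model selection: look up a pre-stored
    deblurring model from the particular vehicle information."""
    # Assumed bucketing: speeds at or above 60 km/h count as "high".
    speed_bucket = "high" if speed >= 60 else "low"
    key = (speed_bucket, location, display_size)
    # Fall back to a default model when no exact correspondence is stored.
    return model_table.get(key, model_table.get("default"))
```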
- The method according to claim 7, wherein the particular vehicle information comprises a vehicle speed, vehicle location information, and an interface display size; said performing deblurring processing on the video sequences based on the identification and comparison result and combined with the particular vehicle information to ensure that the video clarity of the multiple video sequences meets the preset conditions comprises: for each target video segment in the multiple video sequences, performing the following operations: determining a first video deblurring model corresponding to the vehicle speed based on the vehicle speed of the vehicle when collecting the target video segment, and performing deblurring processing on the target video segment using the first video deblurring model to obtain a first video segment, wherein a corresponding relation between different vehicle speeds and different video deblurring models is pre-stored; determining a second video deblurring model corresponding to the vehicle location information based on the vehicle location information when collecting the target video segment, and performing deblurring processing on the target video segment using the second video deblurring model to obtain a second video segment, wherein a corresponding relation between different vehicle location information and different video deblurring models is pre-stored; determining a third video deblurring model corresponding to the interface display size based on the interface display size of a display area corresponding to the target video segment, and performing deblurring processing on the target video segment using the third video deblurring model to obtain a third video segment, wherein a corresponding relation between different interface display sizes and different video deblurring models is pre-stored; and determining a video segment having a highest clarity among the first video segment, the second video segment, and the third video segment, wherein the video segment having the highest clarity is the target video segment meeting the preset conditions.
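The select-the-clearest logic of claim 10 reduces to applying the three models and keeping the result with the highest clarity. In this sketch the models are opaque callables and the clarity measure is passed in; both are assumptions, since the claim fixes neither.

```python
def best_deblurred_segment(segment, speed_model, location_model, size_model, score):
    """Illustrative sketch of claim 10: deblur with the three pre-stored models
    (selected by speed, location and display size) and keep the clearest result.

    speed_model, location_model, size_model: callables segment -> segment
    score: clarity measure, callable segment -> number
    """
    candidates = [speed_model(segment), location_model(segment), size_model(segment)]
    # The clearest of the three candidates is the segment meeting the conditions.
    return max(candidates, key=score)
```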
- The method according to claim 1, wherein said determining the main camera among the multiple on-board cameras in different orientations comprises: determining the main camera among the multiple on-board cameras based on actual video content captured by each of the on-board cameras.
- The method according to claim 1, wherein said determining the main camera from the multiple on-board cameras based on the driving direction of the vehicle and the key location comprises: designating an on-board camera that can capture the key location as the main camera for the vehicle at this location based on the driving direction of the vehicle and the key location.
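Designating the camera that can capture the key location can be sketched as a bearing comparison. The four-camera layout, the angle thresholds, and the bearing inputs are all assumptions made for illustration; the claim only requires that the selection follow from the driving direction and the key location.

```python
def select_main_camera(driving_direction, key_location_bearing):
    """Illustrative sketch of step S213 / claim 12: pick the camera whose
    orientation faces the key location. Bearings are in degrees, clockwise
    from north; a front/right/rear/left camera layout is assumed."""
    # Relative bearing of the key location with respect to the vehicle heading.
    relative = (key_location_bearing - driving_direction) % 360
    if relative < 45 or relative >= 315:
        return "front"
    if relative < 135:
        return "right"
    if relative < 225:
        return "rear"
    return "left"
```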
- A piece of on-board equipment (1200), comprising a memory (1220), a processor (1210), and a computer program (1221) stored in the memory (1220) and executable by the processor (1210), wherein the processor (1210), when executing the computer program (1221), is configured to implement the method for video generation based on on-board multi-camera according to any of claims 1 to 12.
- A computer-readable storage medium storing a computer program (1221), wherein the computer program (1221), when executed by a processor (1210), causes the processor (1210) to implement the method for video generation based on on-board multi-camera according to any of claims 1 to 12.
- A computer program product (1221), wherein the computer program product (1221), when running on a piece of on-board equipment (1200), enables the on-board equipment (1200) to execute the method for video generation based on on-board multi-camera according to any of claims 1 to 12.
Description
TECHNICAL FIELD
The present invention pertains to the field of intelligent vehicle technology, particularly to a method for video generation based on on-board multi-camera, a piece of on-board equipment, a computer-readable storage medium, and a computer program product.
BACKGROUND
In the field of intelligent vehicles, cameras installed on vehicles enable video capture functionalities, which provides the possibility of applying one-click video creation technology in this domain. However, current one-click video creation technologies mainly focus on video data collected by a single camera. Since a vehicle may be equipped with multiple cameras, each capturing different video data, current one-click video creation technologies are not suitable for vehicular scenarios.
US 2014/354816 A1 discloses, for the sake of improving safety while a dump truck is traveling, displaying an image corresponding to the traveling direction of the dump truck. Cameras have fields of vision covering at least the rear, left and right sides of the dump truck, and a monitor is disposed in the operator's cab of the dump truck. The operator's cab further comprises a shift lever for selecting the forward or rearward traveling direction, and a vehicle controller and a display controller are provided for displaying one or more camera images on the monitor in accordance with the traveling direction of the dump truck, on the basis of steering information of the left or right traveling direction of the dump truck and information from the shift lever.
TECHNICAL PROBLEM
One of the objectives of the present invention is to provide a method for video generation based on on-board multi-camera, a piece of on-board equipment, a computer-readable storage medium, and a computer program product which can process video data collected by multiple on-board cameras to create new videos, and realize the application of one-click video creation technology in vehicular scenarios.
TECHNICAL SOLUTION
The object is achieved with the features of independent claim 1 regarding the method for video generation based on on-board multi-camera, with the features of claim 13 regarding the piece of on-board equipment, with the features of claim 14 regarding the computer-readable storage medium, and with the features of claim 15 regarding the computer program product. Further embodiments are defined in the dependent claims.
BENEFICIAL EFFECTS
The method for video generation based on on-board multi-camera provided by the present invention has the following beneficial effects: the on-board equipment can extract one or more video sequences to be processed from the video data of each orientation after obtaining the video data collected separately by multiple on-board cameras installed in different orientations on a vehicle. After determining display areas of the video data from each orientation in a target video to be generated, the on-board equipment can combine multiple video sequences from various orientations based on the display areas to generate the target video. Based on the positional relationship of the on-board cameras, the embodiments of the present invention can combine video data collected by multiple vehicle cameras into a new target video, which addresses the issue of current one-click video creation technologies being inapplicable to on-board multi-camera scenarios, and achieves one-click video creation for video data from multiple on-board cameras in vehicular scenarios.
Meanwhile, the target video generated based on the different orientations of multiple on-board cameras can offer more perspectives, can form more combination schemes and special effects, and can make the target video obtained by one-click video creation more attractive.
DESCRIPTION OF DRAWINGS
To illustrate the technical solutions of the present invention more clearly, the following is a brief introduction of the drawings.
- FIG. 1 is a schematic diagram of a processing flow of a one-click video creation in the existing technologies;
- FIG. 2 is a schematic diagram of a method for video generation based on on-board multi-camera;
- FIG. 3 is a schematic diagram of a processing flow of the method for video generation based on on-board multi-camera;
- FIG. 4 is a schematic diagram illustrating an implementation of step S202 in the method for video generation based on on-board multi-camera;
- FIG. 5 is a schematic diagram illustrating an implementation of step S2021 in the method for video generation based on on-board multi-camera;
- FIG. 6 is a schematic diagram illustrating an example of determining a main camera;
- FIG. 7 is a schematic diagram showing a main video sequence;
- FIG. 8 is a schematic diagram illustrating an implementation of step S2023 in the method for video generation based on on-board multi-camera;
- FIG. 9 is a schematic diagram showing video data marked with a timestamp sequence;
- FIG. 10 is a schematic diagram of a video template;
- FIG. 11 is a schema