CN-121985079-A - Camera synchronous processing method and device, electronic equipment and storage medium
Abstract
Embodiments of the application provide a camera synchronization processing method and apparatus, an electronic device, and a storage medium. The method comprises: triggering at least two image sensors to synchronously acquire raw frames based on a synchronization pulse signal generated by a main control module; storing the raw frames acquired by each image sensor into a corresponding independent buffer, wherein each independent buffer comprises a plurality of memory blocks dynamically managed through a linked-list structure; and acquiring raw frames with the same timestamp from each image sensor's independent buffer and synthesizing them to obtain composite frames, a plurality of consecutive composite frames forming a composite video. Taking physical-layer time alignment as the starting point, a hardware-level time synchronization mechanism improves synchronization precision. Managing the memory blocks through the combination of independent buffers, a shared memory mapping area, and the linked-list structure reduces the number of full-frame copy transfers, and zero-copy data transfer is achieved by having the graphics processor access the shared memory directly.
Inventors
- TANG JINXING
Assignees
- 四川易景智能终端有限公司
Dates
- Publication Date
- 20260505
- Application Date
- 20260402
Claims (10)
- 1. A camera synchronization processing method, the method comprising: triggering at least two image sensors to synchronously acquire raw frames based on a synchronization pulse signal generated by a main control module; storing the raw frames acquired by each image sensor into a corresponding independent buffer, wherein each independent buffer comprises a plurality of memory blocks, and the memory blocks are dynamically managed through a linked-list structure; and acquiring raw frames with the same timestamp from each image sensor's independent buffer and synthesizing them to obtain composite frames, wherein a plurality of consecutive composite frames form a composite video.
- 2. The method of claim 1, wherein triggering the at least two image sensors to synchronously acquire the raw frames based on the synchronization pulse signal generated by the main control module comprises: generating a synchronization pulse signal with a fixed pulse width by a synchronization pulse generator of the main control module; transmitting the fixed-pulse-width pulse signal to clock calibration circuits of the at least two image sensors to correct the local clock offset of each image sensor; and triggering the at least two image sensors to acquire the raw frames according to the corrected local clocks.
- 3. The method of claim 1, wherein storing the raw frames acquired by each image sensor into a respective independent buffer comprises: determining an output frame rate and a per-frame data volume of each image sensor; adjusting the capacity of the independent buffer corresponding to each image sensor according to that sensor's output frame rate and data volume; and storing the raw frames acquired by each image sensor into the capacity-adjusted independent buffer.
- 4. The method of claim 1, wherein acquiring raw frames with the same timestamp from each image sensor's independent buffer and synthesizing them to obtain composite frames comprises: directly accessing the frame data in the independent buffer corresponding to each image sensor through a shared memory mapping area of a graphics processor, and acquiring at least two raw frames with the same timestamp; and performing pixel-level fusion on the at least two raw frames with the same timestamp according to a preset blend weight matrix of the graphics processor, to obtain a composite frame.
- 5. The method of claim 1, further comprising: acquiring, in real time, the fill rate of the independent buffers corresponding to the at least two image sensors; and adjusting the resolution and/or bitrate of the at least two image sensors when the fill rate is below a preset threshold.
- 6. The method of claim 5, wherein the at least two image sensors comprise a primary image sensor and a secondary image sensor, and wherein adjusting the resolution and/or bitrate of the at least two image sensors comprises: identifying a picture scene type from the composite video, the picture scene type being determined based on historical frames; when the picture scene type is a static scene, decreasing the resolution of the secondary image sensor and increasing the compression ratio of the composite video; and when the picture scene type is a dynamic scene, automatically balancing the bitrate allocation between the at least two image sensors, and transmitting the composite video based on that bitrate allocation.
- 7. The method of any one of claims 1-6, further comprising: upon detecting that a recording mode has been selected by the user, starting a multi-stage temperature control unit, wherein the multi-stage temperature control unit dynamically adjusts the operating frequency of the main control module and/or the graphics processor according to feedback from a temperature sensor.
- 8. A camera synchronization processing apparatus, the apparatus comprising: a synchronization control module configured to trigger at least two image sensors to synchronously acquire raw frames based on a synchronization pulse signal generated by a main control module; a storage module configured to store the raw frames acquired by each image sensor into a corresponding independent buffer, wherein each independent buffer comprises a plurality of memory blocks dynamically managed through a linked-list structure; and a synthesis module configured to acquire raw frames with the same timestamp from each image sensor's independent buffer and synthesize them to obtain composite frames, a plurality of consecutive composite frames forming a composite video.
- 9. An electronic device, comprising a processor and a memory communicatively coupled to the processor, wherein the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to implement the method of any one of claims 1 to 7.
- 10. A computer-readable storage medium having computer-executable instructions stored therein, the instructions, when executed by a processor, implementing the method of any one of claims 1 to 7.
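Before the detailed description, the three claimed steps of claim 1 (pulse-triggered capture, per-sensor buffering, timestamp-matched synthesis) can be sketched end to end. This is a minimal Python illustration under stated assumptions, not the claimed hardware implementation: sensors are modeled as callables returning a frame for a pulse time, buffers as lists of (timestamp, frame) pairs, and a composite frame as a tuple.

```python
def capture_synchronized(sensors, n_pulses):
    """Claim 1's three steps, sketched end to end:
    1) each pulse triggers every sensor at the same instant,
    2) each raw frame lands in that sensor's own independent buffer,
    3) frames sharing a timestamp are fetched and composited.
    `sensors` is a list of callables frame = sensor(pulse_time)."""
    buffers = {s: [] for s in sensors}          # step 2: independent buffers
    video = []                                  # consecutive composite frames
    for t in range(n_pulses):                   # step 1: one pulse per frame period
        for s in sensors:
            buffers[s].append((t, s(t)))        # store (timestamp, raw frame)
        frames = [f for s in sensors            # step 3: match on timestamp t
                  for (ts, f) in buffers[s] if ts == t]
        video.append(tuple(frames))             # composite frame (here: a tuple)
    return video
```

In the specification the matching and fusion run on the graphics processor over shared memory; here the loop stands in only to show the data flow between the three steps.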
Description
Camera synchronous processing method and device, electronic equipment and storage medium

Technical Field

The present application relates to the field of image acquisition and processing, and in particular to a camera synchronization processing method and apparatus, an electronic device, and a storage medium.

Background

In multi-camera systems on mobile devices, synchronized dual-camera video acquisition and synthesis has become one of the core capabilities of intelligent terminals, and is widely applied in scenarios such as 4K/8K video recording, multi-view live streaming, real-time AR/VR picture fusion, and industrial inspection. In the prior art, the timestamps of camera frame data are obtained through an application programming interface, and frame alignment is performed based on system clock interrupts. This scheme depends on operating-system scheduling, is heavily disturbed by background tasks, and exhibits timing errors of 10-50 ms, so picture misalignment is pronounced. In addition, the video synthesis pipeline mostly adopts fixed-resolution encoding and full-frame copy transfers, so memory bandwidth occupancy is high, and the resource allocation strategy is static and cannot adapt to dynamic scene changes, easily causing power consumption surges and performance fluctuations.

Disclosure of Invention

Embodiments of the present application provide a camera synchronization processing method and apparatus, an electronic device, and a storage medium, which improve synchronization precision, improve synthesis efficiency, and solve the problem of rigid resource allocation.
In a first aspect, an embodiment of the present application provides a camera synchronization processing method, the method comprising: triggering at least two image sensors to synchronously acquire raw frames based on a synchronization pulse signal generated by a main control module; storing the raw frames acquired by each image sensor into a corresponding independent buffer, wherein each independent buffer comprises a plurality of memory blocks dynamically managed through a linked-list structure; and acquiring raw frames with the same timestamp from each image sensor's independent buffer and synthesizing them to obtain composite frames, wherein a plurality of consecutive composite frames form a composite video.

In one possible implementation, triggering the at least two image sensors to synchronously acquire the raw frames based on the synchronization pulse signal generated by the main control module includes: generating a synchronization pulse signal with a fixed pulse width by a synchronization pulse generator of the main control module; transmitting the fixed-pulse-width pulse signal to clock calibration circuits of the at least two image sensors to correct the local clock offset of each image sensor; and triggering the at least two image sensors to acquire the raw frames according to the corrected local clocks.

In one possible implementation, storing the raw frames acquired by each image sensor into a respective independent buffer includes: determining an output frame rate and a per-frame data volume of each image sensor; adjusting the capacity of the independent buffer corresponding to each image sensor according to that sensor's output frame rate and data volume; and storing the raw frames acquired by each image sensor into the capacity-adjusted independent buffer.
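The clock calibration step can be illustrated with simple offset arithmetic. The Python sketch below assumes an idealized pulse that arrives instantaneously and carries the master timestamp; `ImageSensor`, `calibrate`, and `broadcast_sync_pulse` are hypothetical names for illustration, not terms from the specification.

```python
class ImageSensor:
    """A sensor with a drifting local clock; the offset is corrected
    when the synchronization pulse reaches its calibration circuit."""

    def __init__(self, name, clock_offset_us):
        self.name = name
        self.clock_offset_us = clock_offset_us  # local clock minus master clock

    def calibrate(self, pulse_master_time_us, pulse_local_time_us):
        # The calibration circuit compares the pulse's local arrival time
        # against the master timestamp it carries, then cancels the gap.
        self.clock_offset_us -= (pulse_local_time_us - pulse_master_time_us)


def broadcast_sync_pulse(master_time_us, sensors):
    """The main control module emits one fixed-width pulse to every sensor.
    Propagation delay is assumed zero in this sketch, so the pulse arrives
    at master_time_us plus each sensor's current clock offset."""
    for s in sensors:
        local_arrival = master_time_us + s.clock_offset_us
        s.calibrate(master_time_us, local_arrival)
```

After the broadcast, every sensor's local clock agrees with the master, so frames triggered by subsequent pulses carry matching timestamps; in hardware this correction happens in the clock calibration circuit rather than in software.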
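The linked-list-managed buffer whose capacity follows from the sensor's frame rate and per-frame data volume might look like the following Python sketch. The one-second buffering window, the block count formula, and eviction of the oldest frame when the free chain is empty are illustrative assumptions, not stated in the specification.

```python
class Block:
    """One memory block in a buffer's free/used chain."""
    def __init__(self, size):
        self.size = size    # block capacity in bytes (one raw frame)
        self.frame = None   # (timestamp, payload) or None when free
        self.next = None    # next block in the chain

class SensorBuffer:
    """Independent per-sensor buffer sized from output frame rate and
    per-frame data volume; blocks are managed as singly linked chains."""

    def __init__(self, frame_rate_hz, frame_bytes, seconds_buffered=1.0):
        self.capacity = max(1, int(frame_rate_hz * seconds_buffered))
        self.free_head = None           # chain of empty blocks
        for _ in range(self.capacity):
            b = Block(frame_bytes)
            b.next, self.free_head = self.free_head, b
        self.used_head = None           # chain of filled blocks, newest first

    def store(self, timestamp, payload):
        """Move a block from the free chain to the used chain."""
        if self.free_head is None:
            self._evict_oldest()
        b, self.free_head = self.free_head, self.free_head.next
        b.frame = (timestamp, payload)
        b.next, self.used_head = self.used_head, b

    def _evict_oldest(self):
        # The oldest frame sits at the tail of the newest-first used chain.
        prev, cur = None, self.used_head
        while cur.next is not None:
            prev, cur = cur, cur.next
        if prev is None:
            self.used_head = None
        else:
            prev.next = None
        cur.frame = None
        cur.next, self.free_head = self.free_head, cur

    def get(self, timestamp):
        cur = self.used_head
        while cur is not None:
            if cur.frame[0] == timestamp:
                return cur.frame[1]
            cur = cur.next
        return None

    def fill_rate(self):
        n, cur = 0, self.used_head
        while cur is not None:
            n, cur = n + 1, cur.next
        return n / self.capacity
```

Because storing and evicting only relink chain heads and tails, no frame payload is copied when blocks change state, which is the property the linked-list management is meant to preserve.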
In one possible implementation, acquiring raw frames with the same timestamp from each image sensor's independent buffer and synthesizing them to obtain composite frames includes: directly accessing the frame data in the independent buffer corresponding to each image sensor through a shared memory mapping area of a graphics processor, and acquiring at least two raw frames with the same timestamp; and performing pixel-level fusion on the at least two raw frames with the same timestamp according to a preset blend weight matrix of the graphics processor, to obtain a composite frame.

In one possible implementation, the method further comprises: acquiring, in real time, the fill rate of the independent buffers corresponding to the at least two image sensors; and adjusting the resolution and/or bitrate of the at least two image sensors when the fill rate is below a preset threshold.

In one possible implementation, the at least two image sensors include a primary image sensor and a secondary image sensor, and adjusting the resolution and/or bitrate of the at least two image sensors includes: identifying a picture scene type from the composite video; when the picture scene type is a static scene, decreasing the resolution of the secondary image sensor and increasing the compression ratio of the composite video
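The timestamp matching and per-pixel weighted fusion reduce to the arithmetic below. In the specification the frames live in GPU-mapped shared memory and the fusion runs on the graphics processor; this Python sketch instead uses plain 2-D lists of gray values, and `weight_matrix` plus the `compose_at` helper are illustrative names.

```python
def fuse_frames(frame_a, frame_b, weight_matrix):
    """Pixel-level fusion of two same-timestamp frames:
    out[y][x] = w * a + (1 - w) * b, one blend weight per pixel."""
    return [[int(w * pa + (1 - w) * pb)
             for pa, pb, w in zip(row_a, row_b, row_w)]
            for row_a, row_b, row_w in zip(frame_a, frame_b, weight_matrix)]


def compose_at(buffer_a, buffer_b, timestamp, weight_matrix):
    """Fetch the two raw frames sharing `timestamp` from their independent
    buffers (held here as lists of (timestamp, frame) pairs) and fuse them."""
    frame_a = dict(buffer_a)[timestamp]
    frame_b = dict(buffer_b)[timestamp]
    return fuse_frames(frame_a, frame_b, weight_matrix)
```

A weight of 1.0 keeps the primary sensor's pixel, 0.0 keeps the secondary's, and intermediate values blend the two, which is how a preset weight matrix can, for example, stitch a wide-angle frame into a telephoto frame region by region.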
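The fill-rate-driven adjustment described above can be sketched as a small decision routine. The threshold value, the halving of the secondary resolution, and the even bitrate split are illustrative assumptions; the specification only states that resolution and/or bitrate are adjusted per scene type when the fill rate is below the preset threshold.

```python
def adapt_sensors(fill_rate, scene_type, primary, secondary, threshold=0.5):
    """Adjust per-sensor settings when the buffer fill rate drops below
    the preset threshold. `primary` and `secondary` are dicts holding
    'resolution' (lines) and 'bitrate' (Mbps) entries."""
    if fill_rate >= threshold:
        return "unchanged"
    if scene_type == "static":
        # Static scene: lower the secondary sensor's resolution and raise
        # the composite video's compression ratio (sketched as halving).
        secondary["resolution"] = secondary["resolution"] // 2
        return "lower_resolution_raise_compression"
    # Dynamic scene: rebalance the bitrate budget between the two sensors
    # before transmitting the composite video.
    total = primary["bitrate"] + secondary["bitrate"]
    primary["bitrate"] = secondary["bitrate"] = total // 2
    return "rebalance_bitrate"
```

In a real pipeline this routine would be driven by the buffers' `fill_rate` readings and a scene classifier over historical frames; here the inputs are passed in directly to keep the control logic visible.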