CN-122002078-A - Video playing method, device, equipment and medium of embedded game engine

CN122002078A

Abstract

The invention relates to the technical field of video rendering and discloses a video playing method, device, equipment and medium for an embedded game engine. The method comprises: opening up a first space in CPU memory based on the DMA-BUF (direct memory access buffer) mechanism and generating a file descriptor for the first space; assigning the file descriptor to the GPU; the CPU calling a hardware decoding unit through a customized FFmpeg component to decode an obtained encoded video stream, and writing the decoded picture data into the first space, wherein the customized FFmpeg component is compiled for the embedded environment based on a cross-compilation component; the GPU creating an EGLImage object through the file descriptor; the GPU binding the EGLImage object to an OpenGL texture; and the GPU playing the pictures on the game engine through the OpenGL texture. The invention solves the problems of incompatibility and redundant operations when a game engine plays video on embedded devices.

Inventors

  • WU YAN
  • LIU ZHAOMING
  • DENG LILIANG
  • WAN HONG

Assignees

  • Chongqing Changan Automobile Co., Ltd. (重庆长安汽车股份有限公司)

Dates

Publication Date
2026-05-08
Application Date
2026-01-26

Claims (10)

  1. A video playing method for an embedded game engine, the method comprising: opening up a first space in CPU memory based on a DMA-BUF method, and generating a file descriptor for the first space; assigning the file descriptor to a GPU; the CPU calling a hardware decoding unit through a customized FFmpeg component, decoding an obtained encoded video stream, and writing the decoded picture data into the first space, wherein the customized FFmpeg component is compiled for the embedded environment based on a cross-compilation component; the GPU creating an EGLImage object by using the file descriptor, wherein the EGLImage object is used for accessing the decoded picture data in the first space according to the file descriptor; the GPU binding the EGLImage object to an OpenGL texture; and the GPU playing pictures on the game engine by using the OpenGL texture.
  2. The method of claim 1, wherein generating the customized FFmpeg component based on cross-compilation comprises: installing a cross-compilation component for the embedded operating system; executing a first compilation command through the cross-compilation component to remove redundant modules from the standard FFmpeg component; and executing a second compilation command through the cross-compilation component to enable the decoding module and the hardware acceleration module that call the corresponding hardware functions of the GPU.
  3. The method of claim 1, wherein the GPU binding the EGLImage object to an OpenGL texture comprises: creating a first blank texture for storing the Y component of the YUV-format data, based on the full resolution of the decoded picture data; creating a second blank texture for storing the U component and a third blank texture for storing the V component of the YUV-format data, respectively, based on half the resolution of the decoded picture data; and controlling the first, second and third blank textures to reference the decoded picture data in the first space through the EGLImage object, respectively, so as to complete the OpenGL texture binding.
  4. The method of claim 3, wherein the GPU playing pictures on the game engine by using the OpenGL texture comprises: converting the YUV-format data of the OpenGL texture into RGB-format data; performing color grading on the RGB-format data; scaling the graded RGB-format data according to an adjustment ratio to obtain rendering data, wherein the adjustment ratio is the ratio between the screen resolution and the resolution of the decoded picture data; acquiring the canvas ID of the game engine; mapping the rendering data into a canvas of the game engine based on the canvas ID; and displaying the content of the canvas through the game engine.
  5. The method of claim 4, wherein the color grading of the RGB-format data comprises: acquiring a preset RGB three-channel color lookup table; and querying the color value corresponding to each pixel of the RGB-format data from the RGB three-channel color lookup table, and performing color space conversion on the RGB-format data using the queried color values.
  6. The method of claim 4, further comprising, prior to mapping the rendering data into the canvas of the game engine based on the canvas ID: extracting a first timestamp of the decoded picture data and a second timestamp of the audio data through two independent threads, respectively; setting a dynamic synchronization threshold; and, when the absolute value of the difference between the first timestamp and the second timestamp is greater than the dynamic synchronization threshold, triggering a frame-dropping or frame-repeating mechanism for the decoded picture data to be played so as to bring the first timestamp and the second timestamp into agreement.
  7. The method of claim 4, wherein mapping the rendering data into the canvas of the game engine based on the canvas ID comprises: storing the rendering data of the current frame in a back buffer; judging whether a vertical synchronization signal indicating that the display has finished scanning the previous frame has been received, wherein the rendering data of the previous frame is held in a front buffer; when the vertical synchronization signal is received, swapping the back buffer and the front buffer so that the rendering data of the current frame is stored in the front buffer; and mapping the rendering data of the current frame into the canvas of the game engine based on the canvas ID.
  8. A video playing device for an embedded game engine, the device comprising: a shared video memory module, configured to open up a first space in CPU memory based on a DMA-BUF method and generate a file descriptor for the first space; an index distribution module, configured to assign the file descriptor to the GPU; a data input module, configured to enable the CPU to call a hardware decoding unit through a customized FFmpeg component, decode the obtained encoded video stream, and write the decoded picture data into the first space, wherein the customized FFmpeg component is compiled for the embedded environment based on a cross-compilation component; a reference object creation module, configured to enable the GPU to create an EGLImage object by using the file descriptor, wherein the EGLImage object is configured to access the decoded picture data in the first space according to the file descriptor; a texture binding module, configured to enable the GPU to bind the EGLImage object to an OpenGL texture; and a rendering and playing module, configured to enable the GPU to play pictures on the game engine through the OpenGL texture.
  9. An electronic device, comprising: a memory and a processor in communication with each other, the memory having stored therein computer instructions which, upon execution, cause the processor to perform the method of any one of claims 1 to 7.
  10. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 7.

Description

Video playing method, device, equipment and medium of embedded game engine

Technical Field

The invention relates to the technical field of video rendering, and in particular to a video playing method, device, equipment and medium for an embedded game engine.

Background

In video playing scenarios of an embedded game engine (such as Cocos), especially on devices based on Linux/Yocto systems such as automotive instrument clusters and embedded game consoles, existing video playing schemes have several technical defects that urgently need to be solved and that severely restrict application experience and scene suitability. The first is insufficient platform compatibility: the video playing component officially provided by current game engines (such as CocosVideoPlayer) is developed and adapted only for mainstream platforms such as Web, iOS and Android, whereas the operating system of core embedded scenarios such as automotive instrument clusters is Linux, so these devices cannot effectively integrate the video playing function with the game engine, and the video playing requirements of embedded scenarios cannot be met. In addition, there are obvious performance bottlenecks and prominent resource waste: even on mainstream platforms such as Web, iOS and Android, the video processing flows of the related art contain multiple redundant operations, particularly in video memory management.
In the related art, each time a frame is played, video memory must be requested from the GPU again through the glTexImage2D function, after which the CPU copies the source picture data from CPU memory to GPU video memory and the GPU performs decoding and video format conversion; the glDeleteTextures function must then be called to release the video memory after playing finishes. The frequent allocation and release operations greatly increase GPU memory fragmentation, the copy from CPU memory to GPU video memory also consumes a large amount of time and bandwidth, and these redundant operations jointly reduce the overall operating efficiency of the device and degrade the playing effect.

Disclosure of Invention

The invention provides a video playing method, device, equipment and medium for an embedded game engine, which are used to solve the problems that game engine video playing on embedded devices is incompatible and that the playing mechanism contains many redundant operations.
In a first aspect, the invention provides a video playing method for an embedded game engine, which comprises the steps of: opening up a first space in CPU memory based on a DMA-BUF method and generating a file descriptor for the first space; assigning the file descriptor to a GPU; the CPU decoding an obtained encoded video stream by calling a hardware decoding unit through a customized FFmpeg component and writing the decoded picture data into the first space, wherein the customized FFmpeg component is compiled for the embedded environment based on a cross-compilation component; the GPU creating an EGLImage object by using the file descriptor, wherein the EGLImage object is used for accessing the decoded picture data in the first space according to the file descriptor; the GPU binding the EGLImage object to an OpenGL texture; and the GPU playing pictures on the game engine by using the OpenGL texture. By this technical means, a shared space is opened up in CPU memory and a file descriptor is generated through the DMA-BUF technique, so that an efficient data path between the CPU and the GPU is constructed, redundant copying between CPU memory and GPU video memory is avoided, and data transmission latency is markedly reduced. Calling the hardware decoding unit through the customized FFmpeg component replaces CPU software decoding, greatly reducing CPU occupancy and fitting the low-power constraints of embedded devices; at the same time, the customized component is adapted to the embedded environment through cross-compilation, solving the poor compatibility of the standard component. By binding the EGLImage object to the OpenGL texture, direct access to the decoded data is realized, a zero-copy decoding-rendering link is opened up, and data access efficiency is improved.
Finally, picture playing is completed on the game engine, solving the compatibility pain point of game engine video playing under embedded Linux systems, ensuring smooth output of high-resolution video while keeping resource consumption low and playing stability high, and meeting the core requirements of scenarios such as automotive instrument clusters and embedded game consoles. In some alternative embodiments, the step of generating the custom FFmpeg component based on cross-compilation component com
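The cross-compilation step referenced above corresponds to claim 2: trimming the standard FFmpeg build and re-enabling only the decoder and hardware-acceleration modules the target needs. A hypothetical configure invocation for an aarch64 Yocto target might look like the following; the toolchain prefix, decoder list and V4L2 mem2mem choice are illustrative assumptions, and the real flags depend on the target SoC and SDK.

```shell
# Cross-compile a trimmed FFmpeg for an embedded aarch64 Linux target.
# --disable-everything strips redundant modules (claim 2, first command);
# the --enable-* flags re-enable only what is needed (second command).
./configure \
    --enable-cross-compile \
    --target-os=linux \
    --arch=aarch64 \
    --cross-prefix=aarch64-poky-linux- \
    --disable-everything \
    --disable-doc \
    --disable-programs \
    --enable-decoder=h264 \
    --enable-decoder=hevc \
    --enable-v4l2-m2m \
    --enable-shared \
    --disable-static
make -j"$(nproc)"
make install
```

The resulting shared libraries are what the patent calls the "customized FFmpeg component": a build adapted to the embedded environment with software decoding paths removed in favor of the hardware decoding unit.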