EP-4445972-B1 - SPECIAL EFFECT PROP GENERATION METHOD AND APPARATUS, PICTURE PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE
Inventors
- DAI, ZENG
Dates
- Publication Date
- 20260513
- Application Date
- 20230109
Claims (11)
- A method for generating an effect, comprising: displaying (S101, S201) an effect editing page and acquiring a first image, wherein the effect editing page is provided with a flow information display region and an effect display region, wherein the flow information display region is used for displaying flow information of a prop to be edited, and the effect display region is used for displaying a second image, wherein the second image is a preview image obtained by processing the first image according to the flow information of the prop to be edited; in response to an editing operation, editing (S102) the flow information of the prop to be edited and updating (S102, S208) the second image displayed in the effect display region; and in response to an effect generation operation, generating (S103, S209) a target effect according to edited target prop flow information; wherein the flow information of the prop to be edited is presented in a form of a flowchart, wherein at least two nodes are provided in the flowchart, and each of the at least two nodes has at least one of: an input connection point or an output connection point; wherein in response to the editing operation, editing the flow information of the prop to be edited comprises: displaying (S202) a node list in response to a node addition operation acting on a first node in the flow information of the prop to be edited, and in response to a triggering operation on a second node in the node list, adding (S203) the second node between the first node and a third node, wherein the third node is an adjacent node of the first node; wherein displaying (S202) the node list in response to the node addition operation acting on the first node in the flow information of the prop to be edited, and in response to the triggering operation on the second node in the node list, adding (S203) the second node between the first node and the third node, comprises: in a case where a list control of the first node is displayed in 
response to detecting that a cursor moves to a display region of an output connection point of the first node and the node list is displayed in response to a triggering operation acting on the list control of the first node, adding the second node between the first node and a node whose input connection point is connected to the output connection point of the first node in response to the triggering operation on the second node in the node list; and in a case where the list control of the first node is displayed in response to detecting that the cursor moves to a display region of an input connection point of the first node and the node list is displayed in response to the triggering operation acting on the list control of the first node, adding the second node between the first node and a node whose output connection point is connected to the input connection point of the first node in response to the triggering operation on the second node in the node list.
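The node-insertion behaviour recited in claim 1 amounts to a single rewiring step in the flowchart graph: the existing edge between the first node and its adjacent third node is broken, and both nodes are reconnected through the newly added second node. The sketch below is purely illustrative; the `Node`, `connect`, and `insert_between` names are hypothetical and not part of the claimed method:

```python
class Node:
    """A flowchart node with input and output connection points."""
    def __init__(self, name):
        self.name = name
        self.outputs = []  # nodes whose input connection points we feed
        self.inputs = []   # nodes whose output connection points feed us

def connect(src, dst):
    """Connect src's output connection point to dst's input connection point."""
    src.outputs.append(dst)
    dst.inputs.append(src)

def insert_between(first, second, third):
    """Insert `second` on the existing edge first -> third
    (`third` is an adjacent node of `first`)."""
    first.outputs.remove(third)
    third.inputs.remove(first)
    connect(first, second)
    connect(second, third)

# Example: camera -> beautify becomes camera -> blur -> beautify.
camera, blur, beautify = Node("camera"), Node("blur"), Node("beautify")
connect(camera, beautify)
insert_between(camera, blur, beautify)
print([n.name for n in camera.outputs])  # ['blur']
print([n.name for n in blur.outputs])    # ['beautify']
```

Whether the new node lands upstream or downstream of the first node depends, as the claim states, on whether the list control was reached from an output or an input connection point; the sketch shows only the downstream (output-point) case.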
- The method of claim 1, further comprising at least one of: displaying an input image of any input connection point in response to detecting that the cursor moves to a display region of that input connection point; or displaying an output image of any output connection point in response to detecting that the cursor moves to a display region of that output connection point.
- The method of claim 1, wherein in response to the editing operation, editing the flow information of the prop to be edited further comprises at least one of: in response to a first sliding operation, controlling (S204) an endpoint of a connection line segment at a target input connection point to move with the first sliding operation until the first sliding operation ends, wherein the first sliding operation starts from the target input connection point; or in response to a second sliding operation, drawing (S204) a new connection line segment with a target output connection point as an initial point and controlling a terminal point of the new connection line segment to move with the second sliding operation until the second sliding operation ends, wherein the second sliding operation starts from the target output connection point; or displaying (S206) configuration information of a fourth node in response to a configuration information display operation for the fourth node in the flow information of the prop to be edited; and displaying (S207) modified configuration information in response to a configuration information modification operation.
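The connection-editing operations of claim 3 reduce, at the data level, to detaching or creating one directed edge of the flowchart when a drag gesture ends. A minimal sketch, assuming a plain adjacency-list representation; the `reconnect_input` helper and the node names are hypothetical:

```python
def edges_of(graph):
    """graph: dict mapping a node name to its downstream node names."""
    return {(src, dst) for src, outs in graph.items() for dst in outs}

def reconnect_input(graph, old_src, new_src, dst):
    """Move the connection line segment ending at dst's input connection
    point from old_src's output point to new_src's output point,
    as if the user dragged the endpoint and released it."""
    graph[old_src].remove(dst)
    graph[new_src].append(dst)

graph = {"camera": ["blur"], "sharpen": [], "blur": []}
reconnect_input(graph, "camera", "sharpen", "blur")
print(edges_of(graph))  # {('sharpen', 'blur')}
```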
- The method of claim 3, wherein the configuration information display operation comprises a mouse left-click operation for the fourth node; or the configuration information display operation comprises a triggering operation acting on a configuration control of the fourth node, and before the configuration information of the fourth node is displayed in response to the configuration information display operation for the fourth node in the flow information of the prop to be edited, the method further comprises: displaying the configuration control of the fourth node in response to a mouse left-click operation for the fourth node.
- The method of claim 1, wherein after the target effect is generated according to the edited target prop flow information, the method further comprises: uploading (S210) the target effect to a server for release in response to a release operation for the target effect.
- A method for processing an image, comprising: acquiring (S301, S401) an image to be processed and target prop flow information of a target effect, wherein the target prop flow information is obtained by editing flow information of a prop to be edited in an effect editing page in the following manner: adding a second node between a first node and a third node in response to a triggering operation on the second node in a node list, wherein the third node is an adjacent node of the first node, wherein the flow information of the prop to be edited is presented in a form of a flowchart, at least two nodes are provided in the flowchart, and each of the at least two nodes has an input connection point and/or an output connection point, wherein in an editing process, the effect editing page displays a second image obtained by processing a first image according to the flow information of the prop to be edited, and wherein adding the second node between the first node and the third node in response to the triggering operation on the second node in the node list comprises: in a case where a list control of the first node is displayed in response to detecting that a cursor moves to a display region of an output connection point of the first node and the node list is displayed in response to a triggering operation acting on the list control of the first node, adding the second node between the first node and a node whose input connection point is connected to the output connection point of the first node in response to the triggering operation on the second node in the node list, and in a case where the list control of the first node is displayed in response to detecting that the cursor moves to a display region of an input connection point of the first node and the node list is displayed in response to the triggering operation acting on the list control of the first node, adding the second node between the first node and a node whose output connection point is connected to 
the input connection point of the first node in response to the triggering operation on the second node in the node list; processing (S302) the image to be processed by using the target effect; and in response to the processing being completed and according to an arrangement order of a plurality of nodes in the target prop flow information, sending (S303, S409) a rendering instruction packet to a graphics processing unit (GPU) to instruct the GPU to render an output image of each node and obtain a processed target image, wherein the rendering instruction packet comprises rendering instructions of the plurality of nodes.
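The batching recited in claim 6 — one rendering instruction per node, packed in the arrangement order of the flow and submitted to the GPU as a single packet — can be sketched roughly as follows. All names here (`RenderInstruction`, `build_packet`, `FakeGPU`) are hypothetical stand-ins, not an actual GPU API:

```python
from dataclasses import dataclass

@dataclass
class RenderInstruction:
    node: str    # node whose output image this instruction renders
    shader: str  # hypothetical shader/program identifier for the node

def build_packet(ordered_nodes):
    """Pack one rendering instruction per node, in arrangement order."""
    return [RenderInstruction(node=n, shader=f"{n}_pass") for n in ordered_nodes]

class FakeGPU:
    """Stand-in that 'renders' each node's output image in packet order."""
    def submit(self, packet):
        return [f"image_of_{inst.node}" for inst in packet]

packet = build_packet(["decode", "blur", "beautify", "composite"])
outputs = FakeGPU().submit(packet)
target_image = outputs[-1]  # the last node's output is the processed target image
print(target_image)  # image_of_composite
```

Submitting the whole packet at once, rather than one instruction per round trip, is what lets the GPU render every node's output image from a single submission.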
- The method of claim 6, wherein processing the image to be processed by using the target effect comprises: controlling (S402) a current node to receive input image information for each node in the target prop flow information; controlling (S407) the current node to process the input image information by using at least one target processing space, so as to obtain at least one piece of output image information, wherein each of the at least one target processing space is a memory space occupied when the current node processes the input image information; and controlling (S408) the current node to output the at least one piece of output image information.
- The method of claim 7, wherein before the current node is controlled to process the input image information by using the target processing space, the method comprises: controlling (S403) the current node to determine a target type of each of the at least one target processing space and a target number of the at least one target processing space; controlling (S404) the current node to determine at least one free processing space that is among existing processing spaces and whose space type matches the target type, wherein the existing processing spaces are processing spaces that are applied for by the plurality of nodes in the target prop flow information; wherein in response to a space number of the at least one free processing space being greater than or equal to the target number, controlling (S405) the current node to acquire the target number of free processing spaces and take the free processing spaces as the at least one target processing space of the current node; and in a case where the space number of the at least one free processing space is less than the target number, controlling (S406) the current node to acquire the space number of the at least one free processing space, take the at least one free processing space as a part of the at least one target processing space of the current node, and apply to the GPU for a remaining part of the at least one target processing space according to the target type and a difference between the target number and the space number.
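Claim 8 describes a simple pooling strategy for per-node scratch memory: reuse free processing spaces of the matching type first, and ask the GPU for new allocations only to cover the shortfall. A hedged Python sketch of that strategy (the pool layout and the `acquire_spaces` helper are hypothetical, not the claimed implementation):

```python
def acquire_spaces(free_pool, target_type, target_number, gpu_alloc):
    """Take up to `target_number` free spaces of `target_type` from the pool;
    allocate the remaining (target_number - taken) spaces from the GPU."""
    matching = free_pool.get(target_type, [])
    taken = matching[:target_number]
    free_pool[target_type] = matching[target_number:]
    shortfall = target_number - len(taken)
    fresh = [gpu_alloc(target_type) for _ in range(shortfall)]
    return taken + fresh

# Hypothetical pool: two free RGBA8 spaces exist, the node needs three.
pool = {"rgba8": ["space0", "space1"]}
allocations = []
def fake_gpu_alloc(space_type):
    allocations.append(space_type)
    return f"new_{space_type}_{len(allocations)}"

spaces = acquire_spaces(pool, "rgba8", 3, fake_gpu_alloc)
print(spaces)       # ['space0', 'space1', 'new_rgba8_1']
print(allocations)  # ['rgba8'] -- only the shortfall was newly allocated
```

Reusing matching free spaces first keeps the number of GPU allocation requests bounded by the peak demand of the flow, rather than by the number of nodes.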
- An electronic device, comprising: at least one processor; and a memory configured to store at least one program; wherein the at least one program is executed by the at least one processor to cause the at least one processor to perform the method for generating an effect of any one of claims 1 to 5 or the method for processing an image of any one of claims 6 to 8.
- A computer-readable storage medium on which a computer program is stored, wherein the computer program is executed by a processor to cause the processor to perform the method for generating an effect of any one of claims 1 to 5 or the method for processing an image of any one of claims 6 to 8.
- A computer program product, wherein the computer program product, when executed by a computer, causes the computer to perform the method for generating an effect of any one of claims 1 to 5 or the method for processing an image of any one of claims 6 to 8.
Description
TECHNICAL FIELD
Embodiments of the present disclosure relate to the field of computer technologies and, for example, to a method and apparatus for generating an effect, a method and apparatus for processing an image, an electronic device, and a storage medium.
BACKGROUND
In the related art, R&D personnel need to create a new prop by editing code, and after the prop is created, the R&D personnel check the effect of the prop by trial operation. However, the prop creation process in the related art is cumbersome, and problems existing in the prop cannot be found immediately, resulting in a relatively long time for prop creation. Some prior art is known from CN 110 147 231A and EP 0 528 631A.
SUMMARY
Embodiments of the present disclosure provide a method and apparatus for generating an effect, a method and apparatus for processing an image, and an electronic device, so as to simplify the prop creation process and shorten the time spent on prop creation.
In a first aspect, an embodiment of the present disclosure provides a method for generating an effect. The method includes the steps described below. An effect editing page is displayed and a first image is acquired, where the effect editing page is provided with a flow information display region and an effect display region, where the flow information display region is used for displaying flow information of a prop to be edited, and the effect display region is used for displaying a second image, where the second image is a preview image obtained by processing the first image according to the flow information of the prop to be edited. In response to an editing operation, the flow information of the prop to be edited is edited and the second image displayed in the effect display region is updated. In response to an effect generation operation, a target effect is generated according to edited target prop flow information.
In a second aspect, an embodiment of the present disclosure further provides a method for processing an image. The method includes the steps described below. An image to be processed and target prop flow information of a target effect are acquired, where the target prop flow information is obtained by editing flow information of a prop to be edited in an effect editing page, and in an editing process, the effect editing page displays a second image obtained by processing a first image according to the flow information of the prop to be edited. The image to be processed is processed by using the target effect. When the processing is completed, a rendering instruction packet is sent, according to an arrangement order of multiple nodes in the target prop flow information, to a graphics processing unit (GPU) to instruct the GPU to render an output image of each node and obtain a processed target image, where the rendering instruction packet includes rendering instructions of the multiple nodes.
In a third aspect, an embodiment of the present disclosure provides an apparatus for generating an effect. The apparatus includes a page display module, an editing module, and a prop generation module. The page display module is configured to display an effect editing page and acquire a first image, where the effect editing page is provided with a flow information display region and an effect display region, where the flow information display region is used for displaying flow information of a prop to be edited, and the effect display region is used for displaying a second image, where the second image is a preview image obtained by processing the first image according to the flow information of the prop to be edited. The editing module is configured to, in response to an editing operation, edit the flow information of the prop to be edited and update the second image displayed in the effect display region.
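The editing loop described above — each edit to the prop flow immediately re-renders the preview (the second image) from the first image — can be sketched as re-applying the node pipeline after every change. The `apply_flow` helper and the toy lambda nodes are illustrative only and not part of the disclosed apparatus:

```python
def apply_flow(first_image, flow):
    """Render the preview (second image) by running the first image
    through every node of the prop flow in order."""
    image = first_image
    for node in flow:
        image = node(image)
    return image

# Hypothetical nodes operating on a toy 'image' (a list of pixel values).
brighten = lambda img: [min(255, p + 20) for p in img]
invert = lambda img: [255 - p for p in img]

flow = [brighten]
first_image = [0, 100, 250]
print(apply_flow(first_image, flow))  # [20, 120, 255]

flow.append(invert)                   # editing operation: a node is added
print(apply_flow(first_image, flow))  # preview updates: [235, 135, 0]
```

Because the preview is recomputed on every edit, a creator sees problems in the prop immediately instead of discovering them in a separate trial run after creation.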
The prop generation module is configured to, in response to an effect generation operation, generate a target effect according to edited target prop flow information.
In a fourth aspect, an embodiment of the present disclosure further provides an apparatus for processing an image. The apparatus includes an image acquisition module, an image processing module, and an instruction sending module. The image acquisition module is configured to acquire an image to be processed and target prop flow information of a target effect, where the target prop flow information is obtained by editing flow information of a prop to be edited in an effect editing page, and in an editing process, the effect editing page displays a second image obtained by processing a first image according to the flow information of the prop to be edited. The image processing module is configured to process the image to be processed by using the target effect. The instruction sending module is configured to, when the processing is completed, send a rendering instruction packet, according to an arrangement order of multiple nodes in the target prop flow information, to a graphics processing unit (GPU) to instruct the GPU to render an output image of each node and obtain a processed target image, where the rendering instruction packet includes rendering instructions of the multiple nodes.