EP-4742172-A1 - OBJECT MAP GENERATION METHOD AND APPARATUS, AND DEVICE, COMPUTER-READABLE STORAGE MEDIUM AND COMPUTER PROGRAM PRODUCT

Abstract

Provided in the present application are an object map generation method and apparatus, and a device, a computer-readable storage medium and a computer program product. The method comprises: acquiring a plurality of initial object maps of a virtual object, and adjustment description text used for describing a preset adjustment effect on the virtual object; controlling a virtual camera to photograph the virtual object from a plurality of different photographing angles, so as to obtain captured images respectively corresponding to the photographing angles; on the basis of the adjustment description text, performing image adjustment on each captured image, so as to obtain a reference image corresponding to each captured image, wherein the reference image meets the preset adjustment effect; selecting from among the reference images a target reference image corresponding to each initial object map, wherein the virtual camera can capture the corresponding initial object map at the photographing angle corresponding to the target reference image; and on the basis of the target reference image and the initial object map, generating a target object map which meets the preset adjustment effect.

Inventors

  • LUO, Keyang

Assignees

  • Tencent Technology (Shenzhen) Company Limited

Dates

Publication Date
2026-05-13
Application Date
2024-07-30

Claims (20)

  1. An object map generation method, comprising: obtaining a plurality of initial object maps of a virtual object and an adjustment description text indicating a preset adjustment effect for the virtual object, the plurality of initial object maps forming a surface map of the virtual object; controlling a virtual camera to capture the virtual object at a plurality of capturing angles, to obtain captured images respectively corresponding to the capturing angles; respectively performing image adjustment on the captured images based on the adjustment description text, to obtain reference images corresponding to the preset adjustment effect; selecting, from the reference images, target reference images respectively corresponding to the initial object maps; and generating, based on the target reference images and the initial object maps, target object maps satisfying the preset adjustment effect.
  2. The method according to claim 1, wherein the controlling a virtual camera to capture the virtual object at a plurality of capturing angles, to obtain captured images respectively corresponding to the capturing angles comprises: obtaining a central position of a geometric center of the virtual object in a virtual scene; performing standardization processing on the virtual object in the virtual scene based on the central position, to obtain a target virtual object; and controlling the virtual camera to capture the target virtual object at the plurality of different capturing angles, to obtain the captured images.
  3. The method according to claim 1 or 2, wherein the controlling the virtual camera to capture the target virtual object at the plurality of capturing angles, to obtain the captured images respectively corresponding to the capturing angles comprises: determining, in the virtual scene, a plurality of capturing positions that are away from the central position by a target distance, the capturing angles and the capturing positions being in one-to-one correspondence; and for each capturing position, arranging the virtual camera located at the capturing position to face the target virtual object and capture an image of the target virtual object at a corresponding capturing angle.
  4. The method according to claim 1 or 2, wherein the performing standardization processing on the virtual object in the virtual scene based on the central position, to obtain a target virtual object comprises: obtaining positions of a plurality of object parts of the virtual object respectively in the virtual scene; determining distances between the positions of the plurality of object parts and the central position, and determining a maximum distance among the distances as a reference distance; and performing the following processing on each object part of the virtual object, to obtain the target virtual object: dividing the distance corresponding to the object part by the reference distance, to obtain a standard position of the object part in the virtual scene; and adjusting the position of the object part to the standard position.
  5. The method according to claim 1, wherein the respectively performing image adjustment on the captured images based on the adjustment description text, to obtain reference images corresponding to the preset adjustment effect comprises: performing image content adjustment on the captured images based on the adjustment description text using an image adjustment network to obtain candidate images respectively corresponding to the captured images, the candidate images satisfying the preset adjustment effect described by the adjustment description text; and performing super resolution processing on the candidate images to obtain the reference images respectively corresponding to the candidate images.
  6. The method according to claim 5, further comprising: obtaining an initial image adjustment network, and obtaining an adjustment description text sample, a label image, and a captured image sample, the label image satisfying a preset adjustment effect described by the text sample; performing image content adjustment on the captured image sample based on the adjustment description text sample using the initial image adjustment network, to obtain an adjusted image corresponding to the captured image sample; and determining a loss value of the initial image adjustment network based on the adjusted image and the label image, and training the initial image adjustment network based on the loss value, to obtain the image adjustment network.
  7. The method according to claim 5, wherein the image adjustment network comprises an encoding layer and a decoding layer; and the performing image content adjustment on the captured images based on the adjustment description text using the image adjustment network to obtain candidate images respectively corresponding to the captured images comprises: invoking the encoding layer, and performing image content encoding on the captured images based on the adjustment description text, to obtain image features respectively corresponding to the captured images; and invoking the decoding layer, and performing image content adjustment on the captured images based on the image features, to obtain the candidate images respectively corresponding to the captured images.
  8. The method according to claim 1, wherein the selecting, from the reference images, target reference images respectively corresponding to the initial object maps comprises: obtaining reference capturing positions respectively corresponding to the reference images, the reference capturing positions indicating positions of the virtual camera in the virtual scene during capturing of the captured images corresponding to the reference images; and performing the following processing on each initial object map: determining, for each reference image, the reference image as the candidate image of the initial object map based on the reference capturing position corresponding to the reference image and the initial object map when it is determined that the virtual camera is capable of obtaining the initial object map at the capturing angle corresponding to the reference image; and determining the target reference image corresponding to the initial object map from the candidate image of the initial object map.
  9. The method according to claim 8, further comprising: obtaining a texturing position of the initial object map in the virtual scene, and connecting the texturing position to the reference capturing position in the virtual scene, to obtain a virtual detection line; determining, when the virtual detection line does not pass through another initial object map in the virtual scene, that the virtual camera is capable of obtaining the initial object map at the capturing angle corresponding to the reference image; and determining, when the virtual detection line passes through another initial object map in the virtual scene, that the virtual camera is not capable of obtaining the initial object map at the capturing angle corresponding to the reference image.
  10. The method according to claim 8, wherein the determining the target reference image corresponding to the initial object map from the candidate image of the initial object map comprises: determining, when there is one candidate image for the initial object map, the candidate image as the target reference image corresponding to the initial object map; determining a degree of association between the initial object map and each candidate image when there are a plurality of candidate images for the initial object map; and determining a candidate image corresponding to a maximum degree of association as the target reference image corresponding to the initial object map.
  11. The method according to any one of claims 8 to 10, wherein the determining a degree of association between the initial object map and each candidate image comprises: determining an adjacency object map of the initial object map from the plurality of initial object maps of the virtual object, and performing the following processing on each candidate image: determining a first degree of association of the candidate image based on the adjacency object map and the candidate image; determining an imaging region of the initial object map in the candidate image, and determining a second degree of association of the candidate image based on an area of the imaging region, a value of the second degree of association being positively correlated to the area of the imaging region; and summing the first degree of association and the second degree of association, to obtain the degree of association between the initial object map and the candidate image.
  12. The method according to any one of claims 8 to 11, wherein the determining a first degree of association of the candidate image based on the adjacency object map and the candidate image comprises: determining the first degree of association of the candidate image as a first value when the candidate image is a candidate image of the adjacency object map, the first value being a non-zero constant; and determining the first degree of association of the candidate image as a second value when the candidate image is not the candidate image of the adjacency object map, the second value being equal to zero.
  13. The method according to claim 1, wherein the generating, based on the target reference images and the initial object maps, target object maps satisfying the preset adjustment effect comprises: performing the following processing on each initial object map: determining an imaging region of the initial object map in the target reference image from the target reference image corresponding to the initial object map; adjusting image content of the initial object map into image content of the imaging region in the target reference image, to obtain a candidate map corresponding to the initial object map; determining an adjacency object map of the initial object map from the plurality of initial object maps of the virtual object; and performing smoothing processing on the candidate map corresponding to the initial object map based on the candidate map corresponding to the adjacency object map, to obtain the target object map corresponding to the initial object map.
  14. The method according to claim 13, wherein the performing smoothing processing on the candidate map corresponding to the initial object map based on the candidate map corresponding to the adjacency object map, to obtain the target object map corresponding to the initial object map comprises: performing the following processing on each pixel in the candidate map corresponding to the initial object map to obtain the target object map: determining the pixel as a to-be-smoothed pixel when a minimum distance between the pixel and the candidate map corresponding to the adjacency object map is less than a distance threshold; determining, from pixels of the candidate map corresponding to the adjacency object map, a reference pixel closest to the to-be-smoothed pixel; averaging a color value of the reference pixel and a color value of the to-be-smoothed pixel to obtain an average color value; and adjusting the color value of the to-be-smoothed pixel into the average color value.
  15. The method according to claim 1, wherein after the generating, based on the target reference images and the initial object maps, target object maps satisfying the preset adjustment effect, the method further comprises: respectively replacing the initial object maps on the virtual object with the corresponding target object maps, to obtain a target virtual object satisfying the preset adjustment effect.
  16. The method according to claim 1, wherein the obtaining adjustment description text for describing a preset adjustment effect on the virtual object comprises: generating, in response to an editing operation on the adjustment description text, the adjustment description text for describing the preset adjustment effect on the virtual object.
  17. An object map generation apparatus, comprising: an obtaining module, configured to: obtain a plurality of initial object maps of a virtual object and an adjustment description text indicating a preset adjustment effect for the virtual object, the plurality of initial object maps forming a surface map of the virtual object; a capturing module, configured to control a virtual camera to capture the virtual object at a plurality of capturing angles, to obtain captured images respectively corresponding to the capturing angles; an adjustment module, configured to respectively perform image adjustment on the captured images based on the adjustment description text, to obtain reference images corresponding to the preset adjustment effect; a selection module, configured to select, from the reference images, target reference images respectively corresponding to the initial object maps; and a generation module, configured to generate, based on the target reference images and the initial object maps, target object maps satisfying the preset adjustment effect.
  18. An electronic device, comprising: a memory, configured to store a computer-executable instruction or a computer program; and a processor, configured to implement, when executing the computer-executable instruction or the computer program stored in the memory, the object map generation method according to any one of claims 1 to 16.
  19. A computer-readable storage medium, having a computer-readable instruction stored therein, the computer-readable instruction, when executed by a processor, implementing the object map generation method according to any one of claims 1 to 16.
  20. A computer program product, comprising a computer program or a computer-executable instruction, the computer program or the computer-executable instruction, when executed by a processor, implementing the object map generation method according to any one of claims 1 to 16.
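The capturing-position step of claim 3 places the virtual camera at several positions that are all a target distance away from the object's central position, each facing the object. The sketch below arranges those positions on a spherical grid around the center; the function name, the angle counts, and the grid layout are illustrative assumptions, not taken from the application.

```python
import math

def capturing_positions(center, target_distance, n_azimuth=8, n_elevation=3):
    """Claim 3 sketch: camera positions on a sphere of radius
    target_distance around the object's central position.  The
    azimuth/elevation counts are hypothetical tuning choices."""
    positions = []
    for i in range(n_elevation):
        # spread elevations strictly between the poles
        elev = math.pi * (i + 1) / (n_elevation + 1) - math.pi / 2
        for j in range(n_azimuth):
            azim = 2 * math.pi * j / n_azimuth
            x = target_distance * math.cos(elev) * math.cos(azim)
            y = target_distance * math.sin(elev)
            z = target_distance * math.cos(elev) * math.sin(azim)
            positions.append((center[0] + x, center[1] + y, center[2] + z))
    return positions
```

Every returned position lies exactly at the target distance from the center, matching the one-to-one correspondence between capturing positions and capturing angles recited in the claim.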
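Claims 2 and 4 describe standardization: the distance from each object part to the geometric center is divided by the maximum such distance (the reference distance), so the farthest part ends up at unit distance from the center. The sketch below reads this as a uniform scaling of each part's offset vector; that interpretation, and the tuple-based position representation, are assumptions.

```python
import math

def standardize_object(part_positions, center):
    """Claims 2/4 sketch: scale each part's offset from the geometric
    center by the reference (maximum) distance.  Positions are (x, y, z)
    tuples for illustration."""
    offsets = [tuple(p - c for p, c in zip(pos, center))
               for pos in part_positions]
    reference = max(math.hypot(*o) for o in offsets)  # reference distance
    return [tuple(c + o / reference for c, o in zip(center, off))
            for off in offsets]
```

After standardization the target virtual object fits inside a unit sphere around the center, which keeps the subsequent fixed-distance captures framed consistently.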
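Claim 9's visibility check connects the map's texturing position to the camera's capturing position with a virtual detection line and rejects the view if that line passes through another initial object map. The sketch below approximates each other map by an axis-aligned bounding box and uses the standard slab test for the segment; the box approximation and all names are assumptions for illustration.

```python
def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the straight segment p0 -> p1 pass through the
    axis-aligned box [box_min, box_max]?"""
    tmin, tmax = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-12:                       # segment parallel to slab
            if not box_min[axis] <= p0[axis] <= box_max[axis]:
                return False
        else:
            t1 = (box_min[axis] - p0[axis]) / d
            t2 = (box_max[axis] - p0[axis]) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:
                return False
    return True

def camera_can_see_map(texturing_pos, camera_pos, other_map_boxes):
    """Claim 9 sketch: the virtual detection line from the texturing
    position to the camera must not pass through any other initial
    object map (each approximated here by a bounding box)."""
    return not any(segment_hits_aabb(texturing_pos, camera_pos, lo, hi)
                   for lo, hi in other_map_boxes)
```

In a full implementation the occlusion test would run against the maps' actual geometry (e.g. triangle intersection), but the segment-versus-volume structure is the same.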
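Claims 10 to 12 select among multiple candidate images by summing two degrees of association: a first degree that is a non-zero constant when the candidate also serves the adjacency object map (zero otherwise), and a second degree positively correlated with the area of the initial map's imaging region in the candidate. A minimal sketch, in which the constant `first_value` and the `area_weight` factor are hypothetical tuning values not specified in the claims:

```python
def select_target_reference(candidates, adjacency_candidates, imaging_areas,
                            first_value=1.0, area_weight=1e-3):
    """Claims 10-12 sketch: pick the target reference image for one
    initial object map by maximum degree of association."""
    if len(candidates) == 1:            # single candidate: take it directly
        return candidates[0]

    def degree(c):
        # first degree: non-zero constant iff shared with the adjacent map
        first = first_value if c in adjacency_candidates else 0.0
        # second degree: positively correlated with the imaging area
        second = area_weight * imaging_areas[c]
        return first + second

    return max(candidates, key=degree)
```

Favoring candidates shared with the adjacent map biases neighboring maps toward the same reference view, which reduces visible seams before the smoothing step of claim 14.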

Description

RELATED APPLICATION

This application claims priority to Chinese Patent Application No. 2023112437155, filed on September 22, 2023, which is incorporated herein by reference in its entirety.

FIELD OF THE TECHNOLOGY

This application relates to the field of computer technologies, and in particular, to an object map generation method and apparatus, a device, a computer-readable storage medium, and a computer program product.

BACKGROUND OF THE DISCLOSURE

In recent years, with the development of computer technology, industries such as games, film and television, and virtual reality have developed vigorously. A virtual scene contains virtual objects, virtual light sources, and a virtual camera. A virtual object is the image of a person or item that can interact in the virtual scene, or a movable object in the virtual scene. A virtual object includes an object skeleton and object maps; the object maps are attached to the object skeleton to form the virtual object in the virtual scene. In the related art, adjusting the appearance of a virtual object usually requires manually editing each initial object map to generate an adjusted object map. Because a virtual object has a large number of object maps, object map generation efficiency is very low.

SUMMARY

Embodiments of this application provide an object map generation method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can effectively improve object map generation efficiency.
Technical solutions of the embodiments of this application are implemented as follows: An embodiment of this application provides an object map generation method, including: obtaining a plurality of initial object maps of a virtual object and adjustment description text for describing a preset adjustment effect on the virtual object, the initial object maps being minimum units of surface maps that constitute the virtual object; controlling a virtual camera to capture the virtual object at a plurality of different capturing angles, to obtain captured images respectively corresponding to the capturing angles; respectively performing image adjustment on the captured images based on the adjustment description text, to obtain reference images satisfying the preset adjustment effect; selecting, from the reference images, target reference images respectively corresponding to the initial object maps, the virtual camera being capable of obtaining the corresponding initial object maps at the capturing angles corresponding to the target reference images; and generating, based on the target reference images and the initial object maps, target object maps satisfying the preset adjustment effect.
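Within the final generation step above, claims 13 and 14 smooth the seam between a candidate map and the candidate map of its adjacency object map: each pixel lying within a distance threshold of the adjacent map becomes a to-be-smoothed pixel whose color is averaged with the nearest reference pixel. A minimal sketch, assuming pixels are represented as `{(u, v): (r, g, b)}` dictionaries purely for illustration:

```python
import math

def smooth_seam(candidate_pixels, adjacency_pixels, distance_threshold):
    """Claim 14 sketch: average seam pixels of a candidate map with the
    nearest pixel of the adjacent map's candidate."""
    smoothed = dict(candidate_pixels)
    if not adjacency_pixels:
        return smoothed
    for pos, color in candidate_pixels.items():
        nearest = min(adjacency_pixels, key=lambda q: math.dist(pos, q))
        if math.dist(pos, nearest) < distance_threshold:  # to-be-smoothed
            ref = adjacency_pixels[nearest]               # reference pixel
            smoothed[pos] = tuple((a + b) / 2 for a, b in zip(color, ref))
    return smoothed
```

A production version would index the adjacency pixels spatially instead of scanning them per pixel, but the threshold-then-average structure follows the claim directly.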
An embodiment of this application provides an object map generation apparatus, including: an obtaining module, configured to obtain a plurality of initial object maps of a virtual object and adjustment description text for describing a preset adjustment effect on the virtual object, the initial object maps being minimum units of surface maps that constitute the virtual object; a capturing module, configured to control a virtual camera to capture the virtual object at a plurality of different capturing angles, to obtain captured images respectively corresponding to the capturing angles; an adjustment module, configured to respectively perform image adjustment on the captured images based on the adjustment description text, to obtain reference images satisfying the preset adjustment effect; a selection module, configured to select, from the reference images, target reference images respectively corresponding to the initial object maps, the virtual camera being capable of obtaining the corresponding initial object maps at the capturing angles corresponding to the target reference images; and a generation module, configured to generate, based on the target reference images and the initial object maps, target object maps satisfying the preset adjustment effect. In the above solution, the adjustment module is further configured to: invoke an encoding layer and perform image content encoding on the captured images based on the adjustment description text, to obtain image features respectively corresponding to the captured images; and invoke a decoding layer and perform image content adjustment on the captured images based on the image features, to obtain the candidate images respectively corresponding to the captured images.
In the above solution, the generation module is further configured to respectively replace the initial object maps on the virtual object with the corresponding target object maps, to obtain a target virtual object, the target virtual object satisfying the preset adjustment effect. In the above solution, the obtaining module is further configured to generate, in response to an editing operation on the adjustment description text, the adjustment description text for describing the preset adjustment effect on the virtual object.