CN-121982138-A - Method, apparatus, electronic device and computer program product for generating an image
Abstract
Embodiments of the present disclosure relate to methods, apparatuses, electronic devices, and computer program products for generating images. The method includes generating a target mesh model for a first point cloud and a first image corresponding to the first point cloud. The method further includes obtaining adjusted environmental information by adjusting the environment in the target mesh model. The method also includes generating a second image based on the first image and the adjusted environmental information. According to the disclosed method, the generated image combines the high accuracy of an image acquired from the point cloud with the ability to adjust the environment, so that accurate and diverse images can be generated in a three-dimensional virtual world.
Inventors
- ZHANG XIAOFENG
Assignees
- Robert Bosch GmbH
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2024-10-30
Claims (13)
- 1. A method (200) for generating an image, comprising: generating (202) a target mesh model for a first point cloud and a first image corresponding to the first point cloud; obtaining (204) adjusted environmental information by adjusting an environment in the target mesh model; and generating (206) a second image based on the first image and the adjusted environmental information.
- 2. The method of claim 1, wherein generating (202) a target mesh model for a first point cloud comprises: acquiring a plurality of images using a first sensor; collecting an initial point cloud using a second sensor; generating the first point cloud from the plurality of images, the position of the first sensor, the position of the second sensor, and the initial point cloud; and generating the target mesh model from the plurality of images, the position of the first sensor, the position of the second sensor, and the initial point cloud.
- 3. The method of claim 2, wherein generating the first point cloud from the plurality of images, the position of the first sensor, the position of the second sensor, and the initial point cloud comprises: determining a Gaussian point cloud according to Gaussian distribution parameters and the initial point cloud; iteratively optimizing the Gaussian point cloud using the plurality of images; and determining the optimized Gaussian point cloud as the first point cloud.
- 4. The method of claim 3, wherein iteratively optimizing the Gaussian point cloud using the plurality of images comprises: determining a two-dimensional projection of the Gaussian point cloud from the position of the first sensor; performing differentiable rasterization on the two-dimensional projection to obtain an intermediate image; determining an image loss of the intermediate image based on the plurality of images; in response to the image loss being greater than a threshold, adjusting a Gaussian density and the Gaussian distribution parameters according to the image loss; and updating the Gaussian point cloud according to the Gaussian density and the Gaussian distribution parameters.
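The claims do not specify an implementation, but the render-measure-adjust loop of claims 3-4 can be illustrated with a deliberately simplified, hypothetical sketch. Here a one-dimensional "rasterizer" splats Gaussians onto a row of pixels, an L1 image loss is compared against a threshold, and only the per-Gaussian weights are adjusted (the patented method also adapts the Gaussian density and distribution parameters). All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def rasterize(means, sigmas, weights, width=64):
    """Toy 1-D rasterization: each Gaussian splats its weight onto pixels."""
    xs = np.arange(width, dtype=float)
    basis = np.exp(-0.5 * ((xs[None, :] - means[:, None]) / sigmas[:, None]) ** 2)
    return weights @ basis, basis

def optimize(means, sigmas, weights, target, lr=0.1, threshold=1e-3, iters=500):
    """Render, measure the image loss, and adjust while it exceeds the threshold."""
    for _ in range(iters):
        img, basis = rasterize(means, sigmas, weights)
        residual = img - target
        if np.abs(residual).mean() <= threshold:
            break
        # L1 subgradient of the loss with respect to the per-Gaussian weights
        grad = basis @ np.sign(residual) / residual.size
        weights = weights - lr * grad
    final_loss = np.abs(rasterize(means, sigmas, weights)[0] - target).mean()
    return weights, final_loss
```

In the actual method the rasterization is differentiable in all Gaussian parameters, so the same loss drives updates to positions, covariances, and densities as well.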
- 5. The method of claim 1, wherein obtaining (204) adjusted environmental information by adjusting an environment in the target mesh model comprises: importing the target mesh model into a rendering engine; rendering the target mesh model in the rendering engine to obtain a rendered mesh model; and extracting the adjusted environmental information from the rendered mesh model.
- 6. The method of claim 5, wherein the format of the adjusted environmental information is an image format, and generating (206) a second image based on the first image and the adjusted environmental information comprises: assigning a first weight to each pixel in the first image; assigning a second weight to each pixel in the adjusted environmental information; and fusing the first image and the adjusted environmental information according to the first weight and the second weight to generate the second image.
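The per-pixel weighted fusion of claim 6 amounts to a weighted average of the two images. A minimal sketch, assuming scalar or per-pixel weight maps and a normalization that keeps the result in the original value range (the patent does not fix a normalization; names are illustrative):

```python
import numpy as np

def fuse(first_image, env_info, first_weight, second_weight):
    """Fuse the first image with the environment-information image
    using per-pixel weights, normalized so the output stays in range."""
    w1 = np.asarray(first_weight, dtype=float)
    w2 = np.asarray(second_weight, dtype=float)
    return (w1 * first_image + w2 * env_info) / (w1 + w2)
```

Broadcasting lets the same function accept a single scalar weight per image or a full weight map of the image's shape.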
- 7. The method of claim 1, further comprising: rendering the first point cloud to obtain a first three-dimensional virtual world; and setting a simulated ego vehicle in the first three-dimensional virtual world; and wherein generating (202) a first image corresponding to the first point cloud comprises: setting an image acquisition module for the simulated ego vehicle by using a rendering engine; determining an acquisition position according to a position of the simulated ego vehicle; determining an acquisition viewing angle according to a viewing angle of the image acquisition module; and generating the first image based on the first three-dimensional virtual world according to the acquisition position and the acquisition viewing angle.
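Deriving the acquisition position and viewing angle from the ego vehicle's pose, as in claim 7, is essentially a rigid-body transform: the camera's mounting offset is rotated into the world frame and its orientation is composed with the vehicle's heading. A hypothetical planar (2-D, yaw-only) sketch, with all names and the mounting convention assumed rather than taken from the patent:

```python
import numpy as np

def acquisition_pose(ego_position, ego_yaw, mount_offset, mount_yaw):
    """World-frame camera position and viewing angle from the simulated
    ego vehicle's pose and the camera's vehicle-frame mounting pose."""
    c, s = np.cos(ego_yaw), np.sin(ego_yaw)
    rotation = np.array([[c, -s], [s, c]])  # yaw rotation, vehicle -> world
    position = np.asarray(ego_position, dtype=float) + rotation @ np.asarray(mount_offset, dtype=float)
    viewing_angle = ego_yaw + mount_yaw
    return position, viewing_angle
```

A full implementation would use 3-D poses (e.g. quaternions or rotation matrices) plus the camera intrinsics of the image acquisition module, but the composition of ego pose and mounting pose is the same idea.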
- 8. The method of claim 7, wherein the first point cloud comprises a plurality of three-dimensional objects, a first three-dimensional object of the plurality of three-dimensional objects being acquired into the first image, and the method further comprises: extracting a first association relation between the simulated ego vehicle and the first three-dimensional object according to the first point cloud; and generating a label of the simulated ego vehicle relative to the first three-dimensional object according to the first association relation.
- 9. A method for driving a vehicle, comprising: acquiring (440) a target image, wherein the target image is generated from an initial image and environmental information, the initial image being generated based on a first point cloud, the environmental information being obtained by adjusting an environment of a target mesh model for the first point cloud; determining (442) driving parameters of the simulated ego vehicle from the target image using a planning network; and driving (444) the vehicle by a driving system of the vehicle according to the driving parameters.
- 10. A method for training a planning network, comprising: acquiring (560) a target image, wherein the target image is generated from an initial image and environmental information, the initial image being generated based on a first point cloud, the environmental information being obtained by adjusting an environment of a target mesh model for the first point cloud; and training (562) the planning network using training samples comprising the target image.
- 11. An apparatus (600) for generating an image, comprising: a first generation unit (602) configured to generate a target mesh model for a first point cloud and a first image corresponding to the first point cloud; an information adjustment unit (604) configured to obtain adjusted environmental information by adjusting an environment in the target mesh model; and a second generation unit (606) configured to generate a second image based on the first image and the adjusted environmental information.
- 12. An electronic device, comprising: at least one processor; and a memory coupled to the at least one processor and having instructions stored thereon which, when executed by the at least one processor, cause the electronic device to perform the method of any of claims 1-10.
- 13. A computer program product tangibly stored on a non-transitory computer readable medium and comprising machine executable instructions that, when executed, cause a machine to perform the method of any one of claims 1 to 10.
Description
Method, apparatus, electronic device and computer program product for generating an image

Technical Field
The present disclosure relates to the field of image processing, and more particularly, to a method, an apparatus, an electronic device and a computer program product for generating an image.

Background
The field of autonomous driving is now rapidly evolving. Autonomous driving systems can identify and classify various objects on roads, such as other vehicles, pedestrians, road signs and traffic lights. They also predict the behavior of these objects, for example, to determine whether a pedestrian will suddenly cross a road or whether a vehicle ahead is about to change lanes. Furthermore, an autonomous driving system needs to understand complex driving situations, such as making a right-turn decision at an intersection or keeping a lane stable on a highway. The autonomous driving system perceives the external environment through various data, such as images. In order for an autonomous car to run safely under a variety of conditions, an autonomous driving system needs to be able to cope with images covering different weather, road conditions and traffic patterns. To a certain extent, the performance of an autonomous driving system depends on the diversity of the images provided.

Disclosure of Invention
Embodiments of the present disclosure propose a method, an apparatus, an electronic device and a computer program product for generating an image.

In a first aspect of the present disclosure, a method for generating an image is provided. The method includes generating a target mesh model for a first point cloud and a first image corresponding to the first point cloud. The method further includes obtaining adjusted environmental information by adjusting the environment in the target mesh model. The method also includes generating a second image based on the first image and the adjusted environmental information.
In a second aspect of the present disclosure, a method for driving a vehicle is provided. The method includes obtaining a target image, wherein the target image is generated from an initial image and environmental information, the initial image being generated based on a first point cloud, the environmental information being obtained by adjusting an environment of a target mesh model for the first point cloud. The method further includes determining driving parameters of the simulated ego vehicle from the target image using a planning network. The method further includes driving the vehicle by a driving system of the vehicle according to the driving parameters.

In a third aspect of the present disclosure, a method for training a planning network is provided. The method includes obtaining a target image, wherein the target image is generated from an initial image and environmental information, the initial image being generated based on a first point cloud, the environmental information being obtained by adjusting an environment of a target mesh model for the first point cloud. The method also includes training the planning network using a training sample that includes the target image.

In a fourth aspect of the present disclosure, an apparatus is provided. The apparatus includes a first generation unit configured to generate a target mesh model for a first point cloud and a first image corresponding to the first point cloud. The apparatus further includes an information adjustment unit configured to obtain adjusted environmental information by adjusting the environment in the target mesh model. The apparatus further includes a second generation unit configured to generate a second image based on the first image and the adjusted environmental information.
In a fifth aspect of the present disclosure, there is provided an electronic device comprising at least one processor, and a memory coupled to the at least one processor and having instructions stored thereon which, when executed by the at least one processor, cause the electronic device to perform the method according to the first to third aspects of the present disclosure.

In a sixth aspect of the present disclosure, there is provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising machine-executable instructions that, when executed, cause a machine to perform the method according to the first to third aspects of the present disclosure.

In a seventh aspect of the present disclosure, there is provided a computer-readable storage medium having machine-executable instructions stored thereon, wherein the machine-executable instructions are executed by a processor to implement the method according to the first to third aspects of the present disclosure.

It should be understood that what is described in this summary is not intended to limit the key or essential features of the embodiments of the disclosure, nor to limit the scope of the disclosure. Other features of the present disclosure will become apparent from