JP-7856364-B2 - Methods, apparatus, devices, and computer programs for extracting features from 3D scenes
Inventors
- 徐 志鵬
- 劉 總波
- 梁 宇寧
- 官 冰権
- 楊 益浩
- 申 家忠
- 劉 思亮
- 高 麗娜
- 楊 敬文
- 王 暁曦
- 顧 子卉
- 万 楽
- 殷 俊
- 欧陽 卓能
- 金 鼎健
- 廖 明翔
Assignees
- 騰訊科技(深圳)有限公司
Dates
- Publication Date
- 2026-05-11
- Application Date
- 2023-10-19
- Priority Date
- 2022-12-01
Claims (16)
- A method for extracting features from a three-dimensional scene, performed by a computer device, comprising the steps of: emitting a set of cone rays from a target character object into a 3D scene screen; if a cone ray hits a hit object, returning the object attribute information of the hit object; performing a vector transformation on the object attribute information of each returned hit object to obtain a basic ray feature vector; performing a feature dimensionality reduction process on the basic ray feature vector to obtain a ray feature vector; collecting, based on different granularities and with the location of the target character object as the collection center, an elevation value matrix corresponding to each granularity; performing a feature dimensionality reduction process on the elevation value matrix corresponding to each granularity to obtain an elevation map feature vector; and integrating the ray feature vector and the elevation map feature vector into a three-dimensional scene feature corresponding to the 3D scene screen.
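To make the first two claimed steps concrete, the following is a minimal, hypothetical Python sketch of casting a set of cone rays from a character and collecting the attributes of whatever they hit. The `Hit` record, the ground-plane-only scene, and all numeric values are illustrative assumptions; a real engine would answer the ray queries with its own ray-cast API rather than the plane test used here.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Hit:
    position: np.ndarray  # world-space hit point
    type_id: int          # hypothetical object-type code
    material_id: int      # hypothetical material code

def cast_cone_rays(origin, directions, max_len=50.0):
    """Cast each ray against a toy scene consisting only of the ground
    plane z = 0, returning a Hit per ray or None for a miss within the
    length threshold (a stand-in for a game engine's ray-cast query)."""
    hits = []
    for d in directions:
        d = d / np.linalg.norm(d)
        if d[2] < 0:                       # a downward ray can meet the plane
            t = -origin[2] / d[2]
            if 0.0 < t <= max_len:
                hits.append(Hit(origin + t * d, type_id=1, material_id=2))
                continue
        hits.append(None)                  # no hit within the threshold
    return hits

# Example: eight slightly downward rays from a character 1.8 units tall.
origin = np.array([0.0, 0.0, 1.8])
directions = [np.array([np.cos(a), np.sin(a), -0.3])
              for a in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
print([h.position.round(2) if h else None
       for h in cast_cone_rays(origin, directions)])
```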
- The method according to claim 1, wherein the object attribute information of the hit object includes position information, type information, and material information of the hit object, and the step of performing a vector transformation on the object attribute information of each returned hit object to obtain the basic ray feature vector comprises: performing a vector transformation on the position information, type information, and material information of the hit object included in the object attribute information to obtain an information vector corresponding to the cone ray that hit that object; and integrating the information vectors corresponding to the respective cone rays into the basic ray feature vector.
- The method according to claim 2, wherein the step of performing a vector transformation on the position information, type information, and material information of the hit object included in the object attribute information to obtain the information vector corresponding to the cone ray that hit the object comprises: performing vector normalization on the position information of the hit object to obtain a position vector; performing feature encoding on the type information and material information of the hit object to obtain an encoded vector; and stitching the position vector and the encoded vector together to obtain the information vector corresponding to the cone ray that hit the object; and wherein the step of integrating the information vectors corresponding to the respective cone rays into the basic ray feature vector comprises sequentially stitching together the information vectors corresponding to the respective cone rays to obtain the basic ray feature vector.
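As a hedged illustration of claims 2 and 3, the sketch below builds one information vector per cone ray (normalized position, one-hot type and material encodings, stitched together) and concatenates the per-ray vectors into the basic ray feature vector. The map extent, vocabulary sizes, and the all-zero vector used for misses are assumptions, not values from the patent; the `Hit` records come from the previous sketch.

```python
import numpy as np

MAP_EXTENT = 100.0   # assumed world size used to normalize positions
NUM_TYPES = 8        # hypothetical object-type vocabulary size
NUM_MATERIALS = 4    # hypothetical material vocabulary size

def one_hot(index, size):
    v = np.zeros(size, dtype=np.float32)
    v[index] = 1.0
    return v

def info_vector(hit):
    pos_vec = np.asarray(hit.position, dtype=np.float32) / MAP_EXTENT  # normalize position
    enc_vec = np.concatenate([one_hot(hit.type_id, NUM_TYPES),          # encode type
                              one_hot(hit.material_id, NUM_MATERIALS)]) # encode material
    return np.concatenate([pos_vec, enc_vec])   # stitch position vector + encoded vector

def basic_ray_feature_vector(hits):
    """Sequentially stitch the per-ray information vectors; a miss is
    represented by an all-zero vector (an assumption for illustration)."""
    dim = 3 + NUM_TYPES + NUM_MATERIALS
    return np.concatenate([info_vector(h) if h is not None
                           else np.zeros(dim, dtype=np.float32)
                           for h in hits])
```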
- The method according to claim 1, wherein the step of collecting the elevation value matrix corresponding to each granularity, with the location of the target character object as the collection center, based on different granularities, comprises: collecting, based on the different granularities and with the location of the target character object as the collection center, an N*N grid corresponding to each granularity in all directions around the location of the target character object, wherein N is an integer of 1 or more; and generating the elevation value matrix corresponding to each granularity based on the N*N grid corresponding to that granularity.
- The method according to claim 4, wherein the step of collecting the N*N grid corresponding to each granularity in all directions around the location of the target character object, based on the different granularities, comprises: using the different granularities as unit lengths, of different sizes, for the grids to be collected; and collecting, according to those unit lengths of different sizes and with the location of the target character object as the collection center, an N*N grid corresponding to each unit length in all four directions; and wherein the step of generating the elevation value matrix corresponding to each granularity based on the N*N grid corresponding to that granularity comprises: obtaining the elevation value corresponding to the center point of each cell of the N*N grid, and generating, based on those elevation values, the elevation value matrix corresponding to the granularity of that N*N grid.
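Claims 4 and 5 amount to sampling one N*N elevation matrix per granularity, each centered on the character and using the granularity as the cell unit length. Below is a minimal sketch under that reading, with a synthetic terrain function standing in for the game's height query and illustrative granularities:

```python
import numpy as np

def height_at(x, y):
    """Stand-in terrain height; a real game would sample its height map."""
    return np.sin(0.1 * x) * np.cos(0.1 * y)

def elevation_matrices(center, granularities, n=5):
    """One N*N elevation value matrix per granularity, where each matrix
    entry is the elevation at the center point of the corresponding cell."""
    cx, cy = center
    mats = []
    for g in granularities:                      # e.g. 1 m, 2 m, 4 m cells
        offsets = (np.arange(n) - n // 2) * g    # cells in all directions around the center
        mats.append(np.array([[height_at(cx + dx, cy + dy) for dx in offsets]
                              for dy in offsets], dtype=np.float32))
    return mats

mats = elevation_matrices(center=(10.0, -4.0), granularities=[1.0, 2.0, 4.0])
print([m.shape for m in mats])  # [(5, 5), (5, 5), (5, 5)]
```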
- The method according to claim 1, wherein the step of performing a feature dimensionality reduction process on the elevation value matrix corresponding to each granularity to obtain the elevation map feature vector comprises: stitching and tensor-transforming the elevation value matrices corresponding to the respective granularities to obtain an elevation map feature tensor; and performing a feature dimensionality reduction process on the elevation map feature tensor to obtain the elevation map feature vector.
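Continuing the sketch above, the stitching and tensor transformation of claim 6 can be read as stacking the per-granularity matrices along a channel axis; treating each granularity as a channel is an assumption made for illustration.

```python
import numpy as np

# Stitch the per-granularity elevation matrices and transform them into a
# single elevation map feature tensor (granularity as the channel axis).
elevation_tensor = np.stack(mats, axis=0)   # shape: (num_granularities, N, N)
print(elevation_tensor.shape)               # (3, 5, 5)
```

The dimensionality reduction of this tensor is sketched together with the ray-vector reduction after claim 9 below.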
- The method according to claim 1, wherein the step of performing a feature dimensionality reduction process on the basic ray feature vector to obtain the ray feature vector comprises performing the feature dimensionality reduction process on the basic ray feature vector via a neural network to obtain the ray feature vector; and the step of performing a feature dimensionality reduction process on the elevation value matrix corresponding to each granularity to obtain the elevation map feature vector comprises performing the feature dimensionality reduction process on the elevation value matrix corresponding to each granularity via a neural network to obtain the elevation map feature vector.
- The method according to claim 7, wherein the neural network that performs feature dimensionality reduction on the basic ray feature vector and the neural network that performs feature dimensionality reduction on the elevation value matrices corresponding to the respective granularities are the same convolutional neural network; the step of performing the feature dimensionality reduction process on the basic ray feature vector via a neural network comprises performing it via that convolutional neural network to obtain the ray feature vector; and the step of performing the feature dimensionality reduction process on the elevation value matrix corresponding to each granularity via a neural network comprises performing it via that convolutional neural network to obtain the elevation map feature vector.
- The method according to claim 7, wherein the neural network includes a first neural network and a second neural network, the first neural network and the second neural network being different neural networks; the step of performing the feature dimensionality reduction process on the basic ray feature vector via a neural network comprises performing it via the first neural network to obtain the ray feature vector; and the step of performing the feature dimensionality reduction process on the elevation value matrix corresponding to each granularity via a neural network comprises performing it via the second neural network to obtain the elevation map feature vector.
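Claims 7 to 9 leave the network architectures open. Below is a hedged PyTorch sketch of the two-network variant of claim 9, with a small MLP standing in for the first neural network (reducing the basic ray feature vector) and a small CNN for the second (reducing the stacked elevation value matrices); the final concatenation also illustrates the stitching of claim 12. All layer sizes and dimensions are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

RAY_DIM = 8 * 15   # e.g. 8 cone rays * 15-dim information vectors (illustrative)
FEAT_DIM = 32      # target dimension after reduction (illustrative)

# Hypothetical first network: reduces the basic ray feature vector.
ray_net = nn.Sequential(
    nn.Linear(RAY_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEAT_DIM),
)

# Hypothetical second network: reduces the elevation map feature tensor.
elev_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, FEAT_DIM),
)

ray_vec = ray_net(torch.randn(1, RAY_DIM))             # ray feature vector
elev_vec = elev_net(torch.randn(1, 3, 5, 5))           # elevation map feature vector
scene_feature = torch.cat([ray_vec, elev_vec], dim=1)  # stitch into the scene feature
print(scene_feature.shape)                             # torch.Size([1, 64])
```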
- The method according to claim 1, wherein the step of emitting a set of cone rays from the target character object comprises simulating a cone field-of-view angle and emitting a set of cone rays from the top of the target character object; and the step of returning the object attribute information of the hit object if a cone ray hits the hit object comprises returning the object attribute information of the hit object if, after each of the cone rays emitted by simulating the cone field-of-view angle has reached a length threshold, any of the cone rays has hit a hit object.
- The method according to claim 10, wherein the step of simulating the cone field-of-view angle and emitting a set of cone rays from the top of the target character object comprises obtaining p uniformly distributed ray directions with the top of the target character object as the center and emitting M ray clusters along those ray directions, wherein the envelope of the M ray clusters is conical, each ray cluster contains p cone rays, the cone rays of the M ray clusters are uniformly distributed over M concentric circles, p is an integer of 2 or more, and M is an integer of 1 or more; and the step of returning the object attribute information of the hit object comprises returning the object attribute information of the hit object if, after the p cone rays of each of the M ray clusters have reached the length threshold, any of the cone rays has hit the hit object.
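One plausible reading of claim 11 is that the k-th of the M ray clusters places its p rays at a common polar angle, so a cross-section of the bundle shows M concentric circles with p rays each and the bundle's envelope is a cone. The half-angle and the downward orientation in the sketch below are assumptions for illustration.

```python
import numpy as np

def cone_ray_directions(p=8, m=3, half_angle_deg=45.0):
    """Build p*M unit directions whose envelope is a cone: cluster k's p
    rays share one polar angle, giving M concentric circles in cross-section."""
    dirs = []
    for k in range(1, m + 1):
        polar = np.deg2rad(half_angle_deg) * k / m   # one circle per cluster
        for j in range(p):                           # p uniform azimuths
            az = 2.0 * np.pi * j / p
            dirs.append([np.sin(polar) * np.cos(az),
                         np.sin(polar) * np.sin(az),
                         -np.cos(polar)])            # downward-opening cone
    return np.array(dirs)

print(cone_ray_directions().shape)  # (24, 3) -> M*p rays
```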
- The method according to claim 1, wherein the step of integrating the ray feature vector and the elevation map feature vector into the three-dimensional scene feature corresponding to the 3D scene screen comprises sequentially stitching together the ray feature vector and the elevation map feature vector to obtain the three-dimensional scene feature corresponding to the 3D scene screen.
- The method according to claim 1, further comprising, after the step of integrating the ray feature vector and the elevation map feature vector into the three-dimensional scene feature corresponding to the 3D scene screen: using the three-dimensional scene feature corresponding to the 3D scene screen as a feature training sample; inputting the feature training sample into a win-rate prediction model and using the win-rate prediction model to estimate the probability that the target character object will win next time; and performing reinforcement learning on the win-rate prediction model to update its model parameters based on the probability value of winning next time and the expectation of winning.
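Claim 13 does not fix the exact reinforcement-learning objective, so the sketch below uses a deliberately simple stand-in: a small win-rate model maps the scene feature to a next-win probability, and the realized outcome serves as the winning-expectation signal driving a cross-entropy parameter update. The architecture, loss, and optimizer are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical win-rate prediction model over a 64-dim scene feature.
win_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(win_model.parameters(), lr=1e-3)

def update(scene_feature, won):
    """One stand-in training step: predict the next-win probability and
    update the model parameters from the realized outcome."""
    logit = win_model(scene_feature)
    target = torch.tensor([[1.0 if won else 0.0]])   # winning-expectation signal
    loss = nn.functional.binary_cross_entropy_with_logits(logit, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return torch.sigmoid(logit).item()               # probability of winning next time

print(update(torch.randn(1, 64), won=True))
```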
- A feature extraction device for three-dimensional scenes, comprising: a processing unit that emits a set of cone rays from a target character object into a 3D scene screen; and an acquisition unit that, when a cone ray hits a hit object, returns the object attribute information of the hit object; wherein the processing unit further performs a vector transformation on the object attribute information of each returned hit object to obtain a basic ray feature vector; the processing unit further performs a feature dimensionality reduction process on the basic ray feature vector to obtain a ray feature vector; the processing unit further collects, based on different granularities and with the location of the target character object as the collection center, an elevation value matrix corresponding to each granularity; the processing unit further performs a feature dimensionality reduction process on the elevation value matrix corresponding to each granularity to obtain an elevation map feature vector; and the device further comprises a determination unit that integrates the ray feature vector and the elevation map feature vector into a three-dimensional scene feature corresponding to the 3D scene screen.
- A computer device, comprising a memory in which a computer program is stored, a processor, and a bus system, wherein the bus system connects the memory and the processor so that they communicate with each other, and the processor, when executing the computer program, performs the method according to any one of claims 1 to 13.
- A computer program that causes a computer to perform the method according to any one of claims 1 to 13.
Description
This application claims priority to Chinese patent application No. 202211525532.8, filed with the China National Intellectual Property Office on December 1, 2022, entitled "Method, Apparatus, Device, and Storage Medium for Feature Extraction of 3D Game Scenes," the entire contents of which are incorporated herein by reference.

The embodiments of this application relate to the field of image processing technology, and more particularly to feature extraction technology for three-dimensional scenes.

In practical applications, 3D scenes (e.g., 3D game scenes) are generally unstructured, contain a wide variety of objects with complex shapes, and are difficult to model as data. Efficiently and accurately extracting scene features from 3D scenes therefore remains a challenging problem.

Current methods for extracting features from 3D scenes primarily rely on visual image features and depth map features. Characterizing a 3D scene through visual image features alone has shortcomings: the extracted features are essentially projections of the 3D scene onto a 2D plane and thus contain only 2D information about the scene, and their high data dimension makes machine learning tasks trained on them expensive. Depth map features do contain 3D scene information, but their data dimension is also high, the information carried by neighboring pixels overlaps and is partly redundant, and because depth maps carry only distance information, machine learning tasks that must learn from them inductively remain costly. Combining visual image features and depth map features yields richer scene features, but the extracted feature data is still high-dimensional, so the training cost of machine learning tasks remains high.
The drawings are described briefly as follows:
- Schematic diagram of the architecture of the image data control system in an embodiment of this application.
- Flowchart of one embodiment of the feature extraction method for a three-dimensional scene according to the embodiments of this application.
- Flowcharts of eleven further embodiments of the 3D scene feature extraction method in the embodiments of this application.
- Schematic flowchart illustrating the principle of the feature extraction method for a three-dimensional scene in an embodiment of this application.
- Schematic diagram illustrating the principle of feature processing in the feature extraction method for a three-dimensional scene according to an embodiment of this application.
- Schematic diagram illustrating the effect of a set of cone rays emitted by the feature extraction method for a three-dimensional scene in an embodiment of this application.
- Schematic diagram of the envelope of a set of cone rays emitted by the feature extraction method for a three-dimensional scene according to an embodiment of this application.
- Schematic diagram of the cross-sectional distribution at the point where the horizontal emission distance of a set of cone rays emitted by the feature extraction method for a three-dimensional scene in an embodiment of this application reaches a distance threshold.
- Schematic diagram of an embodiment of a three-dimensional scene feature extraction device according to this application.
- Schematic diagram of an embodiment of a computer device according to this application.

The terms "first," "second," "third," "fourth," etc. (if present) in the specification, claims, and drawings of this application are used to distinguish between similar objects and are not used to indicate a specific order or sequence.