
EP-3769257-B1 - SYSTEM AND METHOD FOR DYNAMICALLY ADJUSTING LEVEL OF DETAILS OF POINT CLOUDS

EP 3769257 B1

Inventors

  • HARVIAINEN, Tatu V. J.

Dates

Publication Date
2026-05-13
Application Date
2019-03-12

Claims (14)

  1. A method comprising: receiving point cloud data representing one or more three-dimensional objects for use as content in an immersive environment, wherein the point cloud data is segmented into segments representing respective objects and assigned with labels associated with the respective objects; determining a viewpoint of a user relative to the point cloud data, wherein the user is viewing the content in the immersive environment; selecting an object from the one or more three-dimensional objects using the viewpoint; obtaining a neural network model based on a label associated with the selected object, wherein the neural network model has been previously trained for increasing the level of detail of the selected object; generating an increased level of detail for the selected object using the obtained neural network model; and rendering a view of the point cloud data comprising the increased level of detail.
  2. The method of claim 1, wherein generating the increased level of detail for the selected object comprises hallucinating additional details for the selected object.
  3. The method of any one of claims 1 to 2, wherein generating the increased level of detail for the selected object comprises using the neural network to infer details lost due to a limited sampling density for the selected object.
  4. The method of any one of claims 1 to 3, further comprising replacing, within the point cloud data, points corresponding to the selected object with the increased level of detail.
  5. The method of claim 4, wherein the increased level of detail for the selected object has a greater resolution than the points replaced within the point cloud data.
  6. The method of claim 5, wherein the selecting of the object from the one or more three-dimensional objects using the viewpoint comprises: responsive to determining that a point distance between the viewpoint and an object picked from the one or more three-dimensional objects is less than a threshold, selecting the object as the selected object.
  7. The method of any one of claims 1 to 6, further comprising: detecting, at a viewing client, one or more objects within the point cloud data, wherein the selecting of the object comprises selecting the object from the one or more objects detected within the point cloud data.
  8. The method of any one of claims 1 to 7, further comprising: capturing, at a viewing client, data indicative of user movement; and setting the viewpoint using, at least in part, the data indicative of user movement.
  9. The method of any one of claims 1 to 8, wherein receiving point cloud data comprises receiving point cloud data from a sensor.
  10. The method of any one of claims 1 to 9, wherein obtaining the neural network model comprises retrieving the neural network model for the selected object from a server.
  11. The method of any one of claims 1 to 10, wherein the obtained neural network model is based on an object category with which the selected object most closely matches.
  12. The method of any one of claims 1 to 11, wherein selecting the object selects an object from the one or more three-dimensional objects for which a level of detail is to be increased, and wherein obtaining the neural network model corresponding to the selected object comprises determining whether the selected object corresponds to a new neural network model not present on a client device.
  13. A method according to any one of claims 1 to 12, wherein the immersive environment is directed to one of virtual reality, VR, augmented reality, AR, and mixed reality, MR.
  14. A system comprising: one or more processors; and one or more non-transitory computer-readable mediums storing instructions that are operative, when executed by the one or more processors, to perform the method of any of claims 1 through 13.
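The distance-based selection step of claim 6 can be sketched as follows. The `Segment` class, function names, and threshold value are illustrative assumptions, not taken from the patent.

```python
import math

class Segment:
    """One labeled segment of the point cloud (see claim 1)."""
    def __init__(self, label, centroid):
        self.label = label        # semantic label assigned at segmentation
        self.centroid = centroid  # representative 3D point of the segment

def select_objects(segments, viewpoint, threshold):
    """Select segments whose distance to the viewpoint is below a threshold
    (claim 6), marking them for level-of-detail enhancement."""
    return [s for s in segments
            if math.dist(viewpoint, s.centroid) < threshold]

segments = [Segment("chair", (1.0, 0.0, 0.0)),
            Segment("lamp", (10.0, 0.0, 0.0))]
near = select_objects(segments, viewpoint=(0.0, 0.0, 0.0), threshold=5.0)
# only the nearby "chair" segment is selected for detail enhancement
```

A per-label neural network model would then be obtained for each selected segment, as in claims 1 and 10.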

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. §119(e) from, U.S. Provisional Patent Application Serial No. 62/645,618, entitled "System and Method for Dynamically Adjusting Level of Details of Point Clouds," filed March 20, 2018.

BACKGROUND

As virtual reality (VR) and augmented reality (AR) platforms move toward increasing consumer adoption, demand for fully three-dimensional (3D) spatial content viewing is growing. The traditional de facto standard for such fully 3D content has been polygonal 3D graphics, manually produced by modeling and rendered with the tools and techniques used for creating real-time 3D games. However, emerging mixed reality (MR) displays and content capture technologies, such as RGB-D sensors and light field cameras, are establishing new best practices for how immersive spatial 3D content may be produced and distributed.

Spatial capturing systems collecting point cloud data from real-world environments can produce large amounts of point cloud data that can be used as content for immersive experiences. However, capture devices that produce point clouds of real-world environments often have limited accuracy and may operate best for relatively close-range observation. This limitation often results in limited resolution and accuracy of the captured point clouds. Representing a high level of detail with point clouds may require large amounts of data that can burden or overwhelm distribution bandwidth, potentially curtailing the practical level of detail for a point cloud.

SUMMARY

An embodiment of the present invention includes a method according to claim 1. In some embodiments, generating the level of detail data for the selected object may include hallucinating additional details for the selected object.
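The detail-generation step uses a trained neural network; as a stand-in to make the data flow concrete, the sketch below doubles the sampling density of an ordered point list by midpoint interpolation. The interpolation is purely illustrative — in the described system a per-category neural network model would generate the added points.

```python
def upsample(points):
    """Double sampling density by inserting a midpoint between each
    consecutive pair of points (stand-in for neural network inference)."""
    out = []
    for a, b in zip(points, points[1:]):
        out.append(a)
        out.append(tuple((x + y) / 2 for x, y in zip(a, b)))
    out.append(points[-1])
    return out

sparse = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
dense = upsample(sparse)  # 5 points where there were 3
```

Per claim 4, the densified points would then replace the selected object's original points within the point cloud.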
In some embodiments, hallucinating additional details for the selected object may increase the sampling density of the point cloud data. In some embodiments, generating the level of detail data for the selected object may include using a neural network to infer details lost due to a limited sampling density for the selected object. In some embodiments, using a neural network to infer details lost for the selected object may increase the sampling density of the point cloud data.

The method may further include: selecting a training data set for the selected object; and generating the neural network model using the training data set.

In some embodiments, the level of detail data for the selected object may have a lower resolution than the points replaced within the point cloud data. Alternatively, the level of detail data for the selected object may have a greater resolution than the points replaced within the point cloud data.

In some embodiments, selecting the selected object from the one or more three-dimensional objects using the viewpoint may include: responsive to determining that a point distance between an object picked from the one or more three-dimensional objects and the viewpoint is less than a threshold, selecting the object as the selected object.

Some embodiments may further include: detecting, at a viewing client, one or more objects within the point cloud data, wherein selecting the selected object may include selecting the selected object from the one or more objects detected within the point cloud data. Some embodiments may further include: capturing, at a viewing client, data indicative of user movement; and setting the viewpoint using, at least in part, the data indicative of user movement. The method may further include: capturing motion data of a head-mounted display (HMD); and setting the viewpoint using, at least in part, the motion data.
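Claims 10 and 12 have the viewing client retrieve a neural network model from a server only when the model is not already present locally. A minimal cache sketch follows, with a hypothetical in-memory "server" and made-up model identifiers.

```python
# Stand-in "server" mapping object labels to trained models; the labels and
# model identifiers here are hypothetical.
SERVER_MODELS = {"chair": "chair-net-v2", "lamp": "lamp-net-v1"}

class ClientModelCache:
    """Viewing-client cache: fetch the per-label model from the server only
    when the client does not already hold it (claims 10 and 12)."""
    def __init__(self):
        self._models = {}
        self.fetches = 0  # counts server round-trips

    def get(self, label):
        if label not in self._models:       # model not present on the client
            self._models[label] = SERVER_MODELS[label]
            self.fetches += 1
        return self._models[label]

cache = ClientModelCache()
cache.get("chair")
cache.get("chair")  # second lookup is served from the local cache
```

Updated models (see the description's "retrieving an updated neural network model") could be handled the same way by versioning the cache keys.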
Retrieving the neural network model for the selected object may include: responsive to determining that a viewing client lacks the neural network model, retrieving, from a neural network server, the neural network model for the selected object. In some embodiments, retrieving the neural network model may include retrieving an updated neural network model for the selected object. The method may further include: identifying, at a first server, a second server; and transmitting, to a client device, an identification of the second server, wherein retrieving the neural network model may include retrieving the neural network model from the second server. Retrieving the neural network model may include: requesting, at a point cloud server, the neural network model; receiving, at the point cloud server, the neural network model; and transmitting, to a client device, the neural network model. In some embodiments, receiving point cloud data may include receiving point cloud data from a sensor.

A further embodiment of the present invention discloses a system according to claim 14. The system may further include one or more graphics processors. The system may further include one or more sensors.

BRIEF DESCRIPTION OF THE DRAWINGS