CN-121999107-A - Computer-implemented method for updating views of a spatial scene

CN 121999107 A

Abstract

The invention relates to a computer-implemented method for updating a view of a spatial scene, comprising the steps of: S1) receiving image points of the view of the spatial scene, S2) capturing a plurality of capture points of the scene by means of a camera, S3) assigning capture points to respectively corresponding image points of the view on the basis of position information, S4) determining a corresponding deviation, S5) excluding from the update capture points whose deviations are below a tolerance threshold, S6) grouping the image points of the view into sub-views, S7) determining a spatial environment for the subset of sub-view image points whose deviations are above the tolerance threshold, S8) determining whether at least one capture point is located within a volume spanned by the determined spatial environment and an image sensor of the camera, S9) excluding the sub-view image points from the update if the deviation of at least one capture point located within the volume is above a motion threshold, S10) updating only image points of the view that are not excluded from the update, S11) providing an updated view.
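The deviation test of steps S4 and S5 can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the function names and the 0.15 m tolerance value are assumptions (claim 7 only names a 5 cm to 50 cm range):

```python
import math

# Assumed tolerance threshold in metres (claim 7: 5 cm to 50 cm, preferably 15 cm).
TOLERANCE = 0.15

def deviation(image_point, acquisition_point):
    # Step S4: the spatial (Euclidean) distance between corresponding points
    # serves as the deviation (cf. claim 6).
    return math.dist(image_point, acquisition_point)

def filter_static(pairs, tolerance=TOLERANCE):
    # Step S5: acquisition points whose deviation is below the tolerance
    # threshold are treated as static and excluded from the update.
    return [(img, acq) for img, acq in pairs if deviation(img, acq) >= tolerance]

pairs = [
    ((0.0, 0.0, 2.00), (0.0, 0.0, 2.05)),  # 5 cm deviation: static, excluded
    ((1.0, 0.0, 2.00), (1.0, 0.0, 2.40)),  # 40 cm deviation: kept for the update
]
remaining = filter_static(pairs)
# → only the second pair remains
```

Skipping the static pair at this stage is what removes the redundant mesh-conversion work for non-moving scene elements described later in the document.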

Inventors

  • M-H. Koch

Assignees

  • Siemens Healthineers AG

Dates

Publication Date
2026-05-08
Application Date
2025-11-05
Priority Date
2024-11-07

Claims (14)

  1. A computer-implemented method for updating views of a spatial scene, the method comprising the steps of: S1) receiving image points of a view of a spatial scene, S2) acquiring a plurality of acquisition points of the scene by means of a camera, wherein each acquisition point is assigned position information, S3) assigning the acquisition points to the respectively corresponding image points of the view on the basis of the respective position information, S4) determining a respective deviation by comparing the position information of the respective acquisition point with the position information of the respectively corresponding image point, S5) excluding from the update those acquisition points whose deviation is below a tolerance threshold, characterized in that the method further comprises the steps of: S6) grouping a plurality of image points of the view into sub-views, S7) if the deviation is above the tolerance threshold for one subset of the sub-view image points and below the tolerance threshold for another subset, determining a spatial environment for the sub-view image points of the subset whose deviation is above the tolerance threshold, S8) determining whether at least one acquisition point is located within a volume spanned by the determined spatial environment and an image sensor of the camera, S9) if the deviation of at least one acquisition point located within the volume is above a motion threshold, excluding the sub-view image points from the update, S10) updating only those image points of the view that are not excluded from the update, based only on the acquisition points that are not excluded from the update, and S11) providing an updated view.
  2. The method of claim 1, wherein the sub-view is kept unchanged if no sub-view image points are allowed to be updated.
  3. The method according to claim 1, wherein the sub-view is kept unchanged if at most a limited part of the sub-view image points, in particular at most half of the sub-view image points, are allowed to be updated.
  4. The method according to any of the preceding claims, wherein the environment of the sub-view image points is determined to be circular or spherical.
  5. The method according to any of the preceding claims, wherein the environment is determined such that it contains all sub-view image points with a deviation above the tolerance threshold.
  6. The method according to any of the preceding claims, wherein the comparison of the position information is performed by determining the spatial distance as the deviation.
  7. The method according to any of the preceding claims, wherein the tolerance threshold is 5 cm to 50 cm, preferably 15 cm.
  8. The method according to any of the preceding claims, wherein the motion threshold is 35 cm to 65 cm, preferably 50 cm.
  9. The method according to any of the preceding claims 1 to 7, wherein the motion threshold is determined as half the distance of the sub-view image point to the camera.
  10. The method according to any of the preceding claims, wherein the view of the spatial scene has the format of a 3D mesh model.
  11. The method according to any of the preceding claims, wherein, to obtain a sub-view, the view is segmented, wherein a continuous area of the view is determined by the segmentation, and wherein the image points belonging to the continuous area are aggregated into the sub-view.
  12. A providing unit for providing an updated view of a spatial scene, the providing unit comprising computing means for performing the steps of the method according to any of the preceding method claims.
  13. A computer program which, when executed on a computing device, causes the computing device to perform the steps of the method according to any of the preceding method claims.
  14. An electronically readable data carrier on which a computer program according to the preceding computer program claim is stored.
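Steps S7 and S8 of claim 1, with the spherical environment of claim 4, can be illustrated with a minimal geometric sketch. Everything below is an assumption for illustration only: the camera is reduced to a single origin point, and the "volume spanned by the environment and the image sensor" is approximated as the cone from that origin tangent to a bounding sphere of the outlier points:

```python
import math

def bounding_sphere(points):
    # Step S7 (one simple choice): a sphere around the centroid that contains
    # all sub-view image points whose deviation exceeds the tolerance threshold.
    n = len(points)
    center = tuple(sum(p[i] for p in points) / n for i in range(3))
    radius = max(math.dist(center, p) for p in points)
    return center, radius

def in_view_cone(point, center, radius, camera=(0.0, 0.0, 0.0)):
    # Step S8 (simplified): test whether an acquisition point lies inside the
    # cone from the camera origin tangent to the spherical environment.
    d_center = math.dist(camera, center)
    if d_center <= radius:          # camera inside the sphere
        return True
    half_angle = math.asin(radius / d_center)
    v = tuple(point[i] - camera[i] for i in range(3))
    a = tuple(center[i] - camera[i] for i in range(3))
    norm_v = math.sqrt(sum(c * c for c in v))
    if norm_v == 0.0:
        return True
    cos_angle = sum(v[i] * a[i] for i in range(3)) / (norm_v * d_center)
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    return angle <= half_angle

# Two sub-view image points with above-tolerance deviation, 2 m in front of
# the camera; a candidate acquisition point halfway along the line of sight.
outliers = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.1)]
center, radius = bounding_sphere(outliers)
# → in_view_cone((0.025, 0.0, 1.0), center, radius) is True
```

An acquisition point inside this cone whose deviation exceeds the motion threshold then triggers the exclusion of step S9.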

Description

Computer-implemented method for updating views of a spatial scene

Technical Field

The present invention relates to a computer-implemented method for updating views of a spatial scene, and to a corresponding system, computer program and computer program product.

Background

Computer-implemented methods for updating views of a spatial scene are known from the prior art. Known solutions for capturing and updating a spatial scene include capturing a point cloud using a suitable camera, such as a depth camera, a time-of-flight camera, or a LIDAR camera (light imaging, detection and ranging). Such cameras can only gather 3D information for those elements of the spatial scene that are visible to the camera. Since a complete 3D scene cannot be acquired, such a camera is also called a 2.5D camera. When analyzing a point cloud of a spatial scene acquired by a 2.5D camera or LIDAR camera, surfaces and edges may be identified and mapped in the form of a 3D mesh model. The acquired scene may be continuously updated to ensure that, for example, movable objects and changes are identified within the scope of a system for avoiding collisions. Systems for collision avoidance are used, for example, in robot-assisted X-ray angiography systems.

It is known from the prior art to segment the acquisition points of a point cloud acquired by a camera. The segmentation divides the acquisition points into sub-point clouds, and thus divides the image points of the view into sub-views. The segmentation may, for example, produce sub-point clouds or sub-views that correspond to elements of the spatial scene, i.e. to objects or persons. Furthermore, it is known to model these elements or the corresponding sub-point clouds as 3D models, for example as 3D mesh models.
By means of such segmentation and modeling, the acquisition points are assigned to the segmented sub-point clouds or sub-views, or to 3D models, in particular 3D mesh models, of the respective elements of the spatial scene. A problem, in particular in real-time applications such as collision avoidance, is the high computational effort of converting a point cloud into a 3D mesh model. Furthermore, the measurement accuracy of the image sensor used is limited. Both factors make it difficult to identify movements and changes in the scene, which should happen as fast and with as little delay as possible, since both are fed into the motion planning for collision avoidance. The invention is based on the insight that static elements of a scene create significant redundancy when converting a point cloud into a 3D mesh model: for static, non-moving elements of the scene (e.g. walls, tables, devices), the post-processing effort of conversion into a mesh model is not necessary.

Disclosure of Invention

The technical problem to be solved by the present invention is to improve and accelerate the conversion of a point cloud into a 3D mesh model for updating a spatial scene, to increase the accuracy of the method, and to reduce the redundancy and inefficiency of such a method by improving the differentiation between static and dynamic elements of the scene. The present invention solves this technical problem by a computer-implemented method, a providing unit, a computer program and a computer program product for updating views of a spatial scene. According to the present invention, a computer-implemented method for updating views of a spatial scene is presented.
The computer-implemented method for updating views of a spatial scene according to the invention comprises the steps of: S1) receiving image points of a view of a spatial scene, S2) acquiring a plurality of acquisition points of the scene by means of a camera, wherein each acquisition point is assigned position information, S3) assigning the acquisition points to the respectively corresponding image points of the view on the basis of the respective position information, S4) determining a respective deviation by comparing the position information of the respective acquisition point with the position information of the respectively corresponding image point, S5) excluding from the update those acquisition points whose deviation is below a tolerance threshold, S6) grouping a plurality of image points of the view into sub-views, S7) if the deviation is above the tolerance threshold for one subset of the sub-view image points and below the tolerance threshold for another subset, determining a spatial environment for the sub-view image points of the subset whose deviation is above the tolerance threshold, S8) determining whether at least one acquisition point is located within a volume spanned by the determined spatial environment and an image sensor of the camera, S9) if the deviation of at least one acquisition point located within the volume is above a motion threshold, excluding the sub-view image points from the update, S10) updating only those image points of the view that are not excluded from the update, based only on the acquisition points that are not excluded from the update, and S11) providing an updated view.
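The sub-view gating described in claims 2 and 3 (keep a sub-view unchanged unless more than a limited part of its image points may be updated) can be sketched as follows. The function name, the 0.15 m tolerance and the one-half fraction are illustrative assumptions, not values fixed by the claims:

```python
def should_update_subview(deviations, tolerance=0.15, static_fraction=0.5):
    # A sub-view is rebuilt only when more than the given fraction of its
    # image points have a deviation above the tolerance threshold; otherwise
    # it is kept unchanged (cf. claims 2 and 3).
    updatable = sum(1 for d in deviations if d >= tolerance)
    return updatable > static_fraction * len(deviations)

# 1 of 3 points above tolerance: the sub-view is kept unchanged
# → should_update_subview([0.01, 0.02, 0.30]) is False
```

Gating whole sub-views in this way avoids repeatedly re-meshing largely static regions, which is the redundancy the invention aims to remove.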