US-12620176-B2 - Automated aerial data capture for 3D modeling of unknown objects in unknown environments
Abstract
A system and method are disclosed for a multi-phase process of automated data capture for photogrammetry and 3D model building of an unknown object (311) in an unknown environment. A planner module (152) generates a flight plan (413) for a camera drone (110) to fly autonomously on a flight path along a virtual polygon grid (302) defined above the target object (311) during a survey phase. A model builder computer (153) receives a point cloud dataset (321) captured by a LiDAR sensor on the camera drone (301) during the survey flight and constructs a low resolution 3D mesh (331) of the target object (311). The planner module (152) generates a flight path (413) for the camera drone inspection phase with virtual waypoints surrounding the target object (311) at a marginal distance from the surface defined by the low resolution 3D mesh (331). The model builder (153, 163) builds a high resolution 3D model (422) of the target object (311) using photogrammetry processing of high resolution images captured by the camera drone (411, 412) during the inspection phase.
Inventors
- Justinian Rosca
- Tao Cui
- Naveen Kumar Singa
Assignees
- SIEMENS CORPORATION
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2022-08-24
Claims (16)
- 1 . A computer-implemented multi-phase method for automated data capture for photogrammetry and model building of an unknown target object, comprising: generating a point cloud dataset using data captured by at least one LiDAR sensor mounted to a camera drone flying autonomously on a survey flight path along a virtual polygon grid during a survey phase of operation, the virtual polygon grid defined at an altitude above the target object and situated parallel to a ground level surface; constructing a low resolution 3D mesh of the target object based on the point cloud dataset, wherein the low resolution 3D mesh comprises polygons having a smallest dimension in a range of 1 meter×1 meter to 3 meters×3 meters on a surface of the target object; generating an inspection flight path with virtual waypoints, each virtual waypoint lying on an offset surface created by uniformly expanding the low resolution 3D mesh by a predetermined distance in a range of 1 meter to 5 meters from the surface defined by the low resolution 3D mesh; capturing images having a resolution of 300 dpi or greater from a camera mounted on the camera drone flying autonomously on the inspection flight path during an inspection phase; and building, by a model building computer, a 3D model comprising a polygon mesh with RGB data having a smallest dimension in a range of 1 centimeter×1 centimeter to 10 centimeters×10 centimeters of the target object using photogrammetry processing of the images having a resolution of 300 dpi or greater during a model building phase.
- 2 . The method of claim 1 , wherein the virtual waypoints are defined based on overlapping requirements by the model builder computer for stitching images in constructing the high resolution 3D model.
- 3 . The method of claim 1 , further comprising: capturing poses and coordinates for each captured high resolution image; wherein building the high resolution 3D model includes using the poses and coordinates.
- 4 . The method of claim 1 , further comprising: capturing poses and coordinates with LiDAR sensor data during the survey phase; wherein generating the low resolution 3D mesh includes using the poses and coordinates.
- 5 . The method of claim 1 , further comprising: generating the inspection flight path for two or more camera drones for a swarm operation during the inspection phase, wherein the inspection flight path is divided and distributed among the two or more camera drones.
- 6 . The method of claim 1 , further comprising: receiving input from a graphical user interface indicating a region of interest on the target object; generating a flight control plan and camera control plan for capturing high resolution images having a resolution of 300 dpi or greater by defining denser inspection flight path sweeping, wherein the inspection flight path comprises virtual waypoints spaced at intervals of 1 to 5 meters from the surface of the target object, and increasing image overlapping to achieve at least 60% overlap between adjacent images.
- 7 . The method of claim 1 , wherein generating the inspection flight path further includes: using a global planner algorithm to calculate a number of sweeps necessary to cover the virtual waypoints in accordance with minimum overlap requirements, wherein the global planner algorithm processes geometry of the low resolution 3D mesh using a passthrough filter to generate a set of perimeters stacked across several altitudes of the target object, each perimeter corresponding to a virtual waypoint path, and expands the perimeters by a safety margin in the range of 1 to 5 meters on which expanded perimeters the virtual waypoints are defined.
- 8 . The method of claim 1 , wherein generating the inspection flight path further includes: expanding the low resolution 3D mesh by a safety margin in the range of 1 to 5 meters; using a local planner algorithm to apply a voxel grid filter to the expanded low resolution 3D mesh to produce a voxel octree representation of the target object, wherein the voxel octree comprises voxels with a minimum edge length of 1 meter, wherein a rapidly-exploring random tree (RRT) algorithm is applied to generate the set of virtual waypoints and the inspection flight path along the virtual waypoints.
- 9 . A system for automated data capture for photogrammetry and model building of an unknown target object, comprising: a processor; and a memory having modules stored thereon to perform instructions when executed by the processor, the modules comprising: a planner module configured to: generate a survey flight plan for a camera drone to fly autonomously on a survey flight path along a virtual polygon grid during a survey phase of operation, the virtual polygon grid defined at an altitude above the target object and situated parallel to a ground level surface; a model builder computer configured to: receive a point cloud dataset using data captured by at least one LiDAR sensor mounted to the camera drone; and construct a low resolution 3D mesh of the target object based on the point cloud dataset, wherein the low resolution 3D mesh has a smallest dimension spanning 1 to 3 meters on a surface of the target object; wherein the planner module is further configured to: generate an inspection flight path with virtual waypoints surrounding the target object at a marginal distance from the surface defined by the low resolution 3D mesh, wherein the marginal distance is in a range of 1 to 5 meters; and wherein the model builder computer is further configured to: receive high resolution images of the target object captured from a camera mounted on the camera drone flying autonomously on the flight path during an inspection phase, wherein the images have a resolution of 300 dpi or greater; and build a high resolution 3D model of the target object using photogrammetry processing of the high resolution images during a model building phase, wherein the high resolution 3D model comprises a polygon mesh with RGB data having a smallest dimension in a range of 1 cm×1 cm to 10 cm×10 cm.
- 10 . The system of claim 9 , wherein the virtual waypoints are defined based on overlapping requirements by the model builder computer for stitching images in constructing the high resolution 3D model.
- 11 . The system of claim 9 , wherein the model builder computer is further configured to: receive captured poses and coordinates for each captured high resolution image; and build the high resolution 3D model using the poses and coordinates.
- 12 . The system of claim 9 , wherein the model builder computer is further configured to: receive captured poses and coordinates with LiDAR sensor data during the survey phase; generate the low resolution 3D mesh using the poses and coordinates.
- 13 . The system of claim 9 , wherein the planner module is further configured to: generate the inspection flight path for two or more camera drones for a swarm operation during the inspection phase, wherein the flight path is divided and distributed among the two or more camera drones.
- 14 . The system of claim 9 , wherein the planner module is further configured to: receive input from a graphical user interface indicating a region of interest on the target object; generate a flight control plan and camera control plan for capturing high resolution images having a resolution of 300 dpi or greater by defining denser inspection flight path sweeping, wherein the inspection flight path comprises virtual waypoints spaced at intervals of 1 to 5 meters from the surface of the target object, and increasing image overlapping to achieve at least 60% overlap between adjacent images.
- 15 . The system of claim 9 , wherein the planner module is further configured to: use a global planner algorithm to calculate a number of sweeps necessary to cover the virtual waypoints in accordance with minimum overlap requirements, wherein the global planner algorithm processes geometry of the low resolution 3D mesh using a passthrough filter to generate a set of perimeters stacked across several altitudes of the target object, each perimeter corresponding to a virtual waypoint path, and expands the perimeters by a safety margin in the range of 1 meter to 5 meters, on which expanded perimeters the virtual waypoints are defined.
- 16 . The system of claim 9 , wherein the planner module is further configured to: expand the low resolution 3D mesh by a safety margin in the range of 1 meter to 5 meters; use a local planner algorithm to apply a voxel grid filter to the expanded low resolution 3D mesh to produce a voxel octree representation of the target object, wherein the voxel octree comprises voxels with a minimum edge length of 1 meter, wherein a rapidly-exploring random tree (RRT) algorithm is applied to generate the set of virtual waypoints and the inspection flight path along the virtual waypoints.
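The inspection-phase waypoint generation recited in claims 1, 8, and 16 can be illustrated with a minimal sketch: push each vertex of the low resolution 3D mesh outward along its area-weighted vertex normal by the claimed safety margin, producing candidate virtual waypoints on the offset surface. This is an illustrative interpretation under stated assumptions, not the patent's actual implementation; all function and variable names are hypothetical.

```python
import numpy as np

def offset_waypoints(vertices, faces, margin=3.0):
    """Sketch of the offset-surface step from claim 1: expand a coarse
    mesh uniformly by `margin` meters (1-5 m in the claims) by moving
    each vertex along its area-weighted vertex normal, yielding
    candidate virtual waypoints for the inspection flight path."""
    normals = np.zeros_like(vertices, dtype=float)
    for a, b, c in faces:
        # Face normal; cross-product magnitude weights it by face area.
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[[a, b, c]] += n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    normals /= np.where(lengths == 0, 1.0, lengths)
    return vertices + margin * normals

# Toy example: a unit tetrahedron stands in for the low resolution mesh.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # outward winding
wps = offset_waypoints(verts, faces, margin=2.0)
# Each waypoint ends up exactly 2 m from its source vertex.
```

A production planner would additionally enforce the overlap requirements of claims 2 and 7 and run collision checks (e.g., the claimed RRT over a voxel octree) before committing to the path.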
Description
STATEMENT REGARDING FEDERALLY SPONSORED DEVELOPMENT
Development for this invention was supported in part by Subaward Agreement No. ARM-TEC-19-04-F10 awarded by the Advanced Robotics for Manufacturing Institute (ARM). The United States Government has certain rights in this invention.
TECHNICAL FIELD
This application relates to automated photography and photogrammetry. More particularly, this application relates to automated image collection for photogrammetry and 3D modeling of unknown objects in unknown environments.
BACKGROUND
Model building and photogrammetry is the process of obtaining 3-dimensional models of physical objects or environments from gathered images (RGB, IR), point cloud data, sensor positions, and the like. It has a wide range of industrial applications, such as digital twin modeling, structure inspection (buildings, power plants, wind turbines), equipment maintenance (ships, large vehicles, trains), and geographical survey (e.g., Google Earth, construction planning, civil engineering), for obtaining reliable and accurate information about the environment and physical objects of interest. Traditional model building and photogrammetry requires collecting images and other sensor data (LiDAR, point clouds) from various locations, with various perspective angles, and with sufficient overlap, such that the collection of image and auxiliary data fully "covers" the physical object of interest in 3D space. That data is then used in an offline process to generate the 3D model, using photogrammetry software tools such as Bentley ContextCapture or other equivalent tools. The offline process typically takes hours or days of intensive computation, depending on the size of the model and the computational power of the host system.
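The "sufficient overlap" requirement above can be made concrete with a small pinhole-camera calculation: the ground footprint of a nadir image grows linearly with altitude, and the along-track spacing between exposures follows from the desired forward overlap. The camera parameters below are hypothetical examples chosen for illustration, not values from the patent.

```python
def capture_spacing(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm,
                    overlap_frac):
    """Footprint of one nadir image and the along-track spacing that
    achieves a given forward overlap fraction. Pinhole model: the
    ground footprint scales with altitude / focal length."""
    footprint_w = altitude_m * sensor_w_mm / focal_mm
    footprint_h = altitude_m * sensor_h_mm / focal_mm
    # To overlap by overlap_frac, advance by the non-overlapping share.
    spacing = footprint_h * (1.0 - overlap_frac)
    return footprint_w, footprint_h, spacing

# Example: 30 m altitude, 24 mm lens on a 36x24 mm full-frame sensor,
# 60% forward overlap (the minimum recited in claims 6 and 14).
w, h, step = capture_spacing(30.0, 24.0, 36.0, 24.0, 0.60)
# Footprint 45 m x 30 m; trigger one photo every 12 m along track.
```

This is the kind of geometric bookkeeping a photographer does implicitly, and that the automated planner described below must do explicitly for every sweep.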
If the model generated by the photogrammetry tool does not satisfy the target requirements (typically because the collected data is insufficient or of poor quality), then another data collection pass is needed. This traditional process is more of an "art" than a rigorous engineering process. Primarily a manual effort, the photos are taken by a photographer in a known environment, who decides how and where the pictures should be taken using human perception of the object and environment. The process is iterative and slow, as it relies on feedback from offline modeling iterations lasting hours to days, followed by reshooting photos to capture missing data. In recent years, aerial robots such as drones have been added as a tool for data acquisition in the model building process, extending the reach of image data collection. In most cases, the drone is used as a flying camera, but the process remains largely manual, with the drone operated by a human pilot or drone photographer. Furthermore, drones have introduced new problems regarding safety, with no assurance of improved data quality. For example, controlling the drone properly and safely while collecting images in an unknown environment is difficult. A human operator has trouble perceiving a flying object's location, especially as it moves further away. The operator must also contend with safety, avoiding collisions while the drone is close to the object under inspection, all while navigating the mobile robot along a precise sweeping path. Image gathering quality requires precise poses and path overlaps while flying the drone around an unknown object that may have hidden obstacles. All of these challenges are magnified for modeling tasks involving supersized structures.
SUMMARY
A system and method are disclosed with a multi-phase workflow for automated drone-based aerial data capture and a photogrammetry process for model building of unknown objects in unknown environments. During a survey phase, a camera drone autonomously obtains rough environment data in an online and safe manner. From this rough environment data, a low resolution 3D mesh of the target object is generated. In an inspection phase, a navigation plan is generated from the first-phase information, and a camera drone autonomously executes a flight plan that ensures safety while capturing high quality images for a complete data set. Once all image data is retrieved, a model building phase employs a model builder computer with modeling software to generate a high resolution 3D model useful for digital twin applications. The methods disclosed use aerial drones equipped with various sensors (e.g., LiDAR and camera) capable of inspecting a large outdoor structure; however, they are applicable to a wider range of use cases involving image data collection for model building.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-limiting and non-exhaustive embodiments of the present disclosure are described with reference to the following FIGURES, wherein like reference numerals refer to like elements throughout the drawings unless otherwise noted.