US-12620112-B2 - Subsurface imaging and display of 3D digital image and 3D image sequence

US12620112B2

Abstract

To simulate a 3D image of a subsurface below a surface, the system has a memory device for storing an instruction, a processor in communication with the memory device configured to execute the instruction, and a subsurface image capture module in communication with the processor, the subsurface image capture module having one or more wave generating devices and one or more sensors affixed to a vehicle to capture a series of digital image datasets of the subsurface with coordinate reference data, wherein the processor executes an instruction to generate a digital model of the series of digital image datasets of the subsurface while maintaining the coordinate reference data, wherein the processor executes an instruction to determine a depth map of the digital model, and wherein the processor executes an instruction to identify a key subject point in the digital model, wherein the subsurface includes internal biology, below-ground, or underwater environments.

Inventors

  • Jerry Nims
  • William M. Karszes
  • Samuel Pol

Assignees

  • Jerry Nims
  • William M. Karszes
  • Samuel Pol

Dates

Publication Date
2026-05-05
Application Date
2024-09-17

Claims (20)

  1. A system to capture a plurality of two dimensional (2D) images of a terrain of a scene, process the images, and view a multidimensional digital image, the system comprising: a vehicle having a geocoding detector to identify coordinate reference data of said vehicle, a memory device for storing an instruction, a processor in communication with said memory device configured to execute said instruction, and an image capture module in communication with said processor, said capture module having a 2D RGB digital camera to capture a plurality of 2D digital images of the terrain and a digital depth capture device to capture a series of digital elevation scans to generate a digital elevation model of the terrain, with said coordinate reference data; said processor executes an instruction to overlay said plurality of 2D digital images of the terrain thereon said digital elevation model of the terrain while maintaining said coordinate reference data, wherein said processor executes an instruction to determine a depth map of said digital elevation model; said processor executes an instruction to save said plurality of 2D digital images of the terrain in a sequence relative to said coordinate reference data; said processor executes an instruction to align said sequence of said plurality of said 2D digital images of the terrain horizontally and vertically; said processor executes an instruction to select a key subject point in said sequence of said plurality of 2D digital images of the terrain and align said sequence of said plurality of 2D digital images about said key subject point; a display in communication with said processor, said display configured to display said multidimensional digital image, said display having a lenticular lens configured as a plurality of pixels having a refractive element integrated therein, said refractive element having a repeating series of lens segments aligned as a single layer therewith said plurality of pixels; said processor executes an instruction to interphase said sequence of said plurality of 2D digital images of the terrain aligned about said key subject point to correspond to said lenticular lens spacing; said processor executes an instruction to save said sequence of said 2D digital images of the terrain as one of a plurality of image datasets of the terrain; said processor executes an instruction to generate a digital model of said sequence of 2D digital images of the terrain while maintaining said coordinate reference data and generate the multidimensional digital image therefrom; and said processor executes an instruction to display the multidimensional digital image of the terrain on said display.
  2. The system of claim 1, wherein said processor executes an instruction to automatically select said key subject point in said sequence of said plurality of 2D digital images of the terrain.
  3. The system of claim 1, wherein said processor executes an instruction to enable a user to select said key subject point in said sequence of said plurality of 2D digital images of the terrain via an input from said display.
  4. The system of claim 1, wherein said processor executes an instruction to merge said plurality of 2D digital images into a 2D digital image dataset of the terrain with said coordinate reference data.
  5. The system of claim 4, wherein said processor executes an instruction to merge said series of digital elevation scans into a digital elevation model of the terrain with said coordinate reference data.
  6. The system of claim 5, wherein said processor executes an instruction to overlay said 2D digital image dataset thereon said digital elevation model of the terrain while maintaining said coordinate reference data as a 3D color mesh dataset.
  7. The system of claim 6, wherein said processor executes an instruction to determine a depth map of said 3D color mesh dataset.
  8. The system of claim 7, wherein said processor executes an instruction to identify a key subject point in said 3D color mesh dataset.
  9. The system of claim 8, wherein said processor executes an instruction to generate a set of 3D frames of said 3D color mesh dataset images via a virtual camera moving in an arc about said key subject point.
  10. The system of claim 9, wherein said processor executes an instruction to horizontally align said set of 3D frames about said key subject point as a set of 3D HIT images to create a parallax between a near plane and a far plane relative to said key subject point.
  11. The system of claim 10, wherein said processor executes an instruction to perform a dimensional image format transform of said 3D HIT images into 3D DIF images.
  12. The system of claim 9, wherein said processor executes an instruction to identify a first proximal plane and a second distal plane within said 3D frames.
  13. The system of claim 12, wherein said processor executes an instruction to determine a depth estimate for said first proximal plane and said second distal plane within said 3D frames.
  14. The system of claim 11, wherein said processor executes an instruction to align said 3D DIF images sequentially in a palindrome loop as a multidimensional digital image sequence.
  15. The system of claim 14, wherein said processor executes an instruction to edit said multidimensional digital image sequence.
  16. The system of claim 15, wherein said processor executes an instruction to display said multidimensional digital image sequence on said display.
  17. The system of claim 10, wherein said processor executes an instruction to perform an interphasing of two of said 3D DIF images relative to said key subject point as a multidimensional digital image to introduce a binocular disparity between said two of said 3D DIF images.
  18. The system of claim 17, wherein said processor executes an instruction to edit said multidimensional digital image.
  19. The system of claim 15, wherein said processor executes an instruction to display said multidimensional digital image on said display.
  20. The system of claim 19, wherein said display is configured having alternating digital black lines via a barrier screen.
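
The interphasing recited in claim 1 — mapping the sequence of aligned frames onto the lenticular lens spacing — is, at its core, column interleaving: under each lenticule, consecutive pixel columns are drawn from consecutive frames so that each eye position sees a different frame. A minimal sketch of that idea (not the patent's implementation; the function name and the one-column-per-frame pitch are illustrative assumptions):

```python
import numpy as np

def interphase(frames):
    """Column-interleave aligned 2D frames for a lenticular lens whose
    pitch spans len(frames) pixel columns. Column `col` of the output
    is taken from frame `col % n`, so each lenticule covers one column
    from every frame in the sequence.

    frames: list of H x W x C arrays of identical shape and dtype.
    """
    n = len(frames)
    h, w = frames[0].shape[:2]
    out = np.empty_like(frames[0])
    for col in range(w):
        # Slot (col % n) under lenticule (col // n) comes from frame (col % n).
        out[:, col, :] = frames[col % n][:, col, :]
    return out
```

With three frames, the output columns cycle frame 0, 1, 2, 0, 1, 2, …, which is the spatial multiplexing a three-view lenticular sheet expects.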

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 17/511,490 filed on Oct. 26, 2021, entitled "Subsurface Imaging and Display of 3D Digital Image and 3D Image Sequence". The foregoing is incorporated herein by reference in its entirety.

FIELD OF THE DISCLOSURE

The present disclosure is directed to 2D and 3D model image capture from imaging diagnostic tools, simulating display of a 3D or multi-dimensional image sequence, and viewing of a 3D or multi-dimensional image.

BACKGROUND

The human visual system (HVS) relies on two-dimensional images to interpret three-dimensional fields of view. By utilizing the mechanisms within the HVS, we create images and scenes that are compatible with the HVS. Mismatches between the point at which the eyes must converge and the distance to which they must focus when viewing a 3D image have negative consequences. While 3D imagery has proven popular and useful for movies and digital advertising, many other applications could be served if viewers were enabled to view 3D images without wearing specialized glasses or a headset, which is a well-known problem. Misalignment in these systems results in jumping, out-of-focus, or fuzzy features when viewing digital multidimensional images, and viewing such images can lead to headaches and nausea. In natural viewing, images arrive at the eyes with varying binocular disparity, so that as viewers look from one point in the visual scene to another, they must adjust their eyes' vergence. The distance at which the lines of sight intersect is the vergence distance; failure to converge at that distance results in double images. The viewer also adjusts the focal power of the lens in each eye (i.e., accommodates) appropriately for the fixated part of the scene. The distance to which the eye must be focused is the accommodative distance; failure to accommodate to that distance results in blurred images.
Vergence and accommodation responses are coupled in the brain: changes in vergence drive changes in accommodation, and changes in accommodation drive changes in vergence. Such coupling is advantageous in natural viewing because vergence and accommodative distances are nearly always identical. In 3D images, varying binocular disparity stimulates changes in vergence as happens in natural viewing, but the accommodative distance remains fixed at the display's distance from the viewer, so the natural correlation between vergence and accommodative distance is disrupted, leading to the so-called vergence-accommodation conflict. The conflict causes several problems. First, differing disparity and focus information cause perceptual depth distortions. Second, viewers experience difficulty simultaneously fusing and focusing on the key subject within the image. Finally, attempting to adjust vergence and accommodation separately causes visual discomfort and fatigue in viewers. Perception of depth is based on a variety of cues, with binocular disparity and motion parallax generally providing more precise depth information than pictorial cues; they provide two independent quantitative cues for depth perception. Binocular disparity refers to the difference in position between the two retinal image projections of a point in 3D space. Conventional stereoscopic displays force viewers to try to decouple these processes: while they must dynamically vary vergence angle to view objects at different stereoscopic distances, they must keep accommodation at a fixed distance or else the entire display will slip out of focus. This decoupling generates eye fatigue and compromises image quality when viewing such displays.
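
The vergence geometry described above can be made concrete with a small sketch. The 64 mm interpupillary distance and the on-axis fixation point are illustrative assumptions, not values from the disclosure:

```python
import math

IPD = 0.064  # interpupillary distance in metres (assumed typical adult value)

def vergence_angle(distance_m):
    """Angle (radians) between the two lines of sight when both eyes
    fixate a point `distance_m` straight ahead."""
    return 2.0 * math.atan(IPD / (2.0 * distance_m))

def binocular_disparity(fixation_m, point_m):
    """Relative disparity (radians) of a point at `point_m` while the
    eyes converge at `fixation_m`; positive means the point is nearer
    than the fixation distance."""
    return vergence_angle(point_m) - vergence_angle(fixation_m)
```

A point at the fixation distance has zero disparity; points nearer than fixation yield positive (crossed) disparity, which is exactly the quantity a stereoscopic display manipulates while accommodation stays pinned to the screen.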
Recently, a subset of photographers has been utilizing 1980s cameras, such as the NIMSLO and NASHIKA 35 mm analog film cameras, or a digital camera moved between a plurality of points, to take multiple frames of a scene, develop the film of the multiple frames from the analog camera, upload the images into image software such as PHOTOSHOP, and arrange the images to create a wiggle gram, a moving GIF effect. X-ray image quality has changed little since Tesla built his x-ray prototype in 1896. Therefore, it is readily apparent that there is a recognizable unmet need for a system having a 2D digital image and 3D model capture system for subsurface imaging diagnostic tools, an image manipulation application, and display of a 3D digital image sequence or digital multi-dimensional image that may be configured to address at least some aspects of the problems discussed above.

SUMMARY

Briefly described, in an example embodiment, the present disclosure may overcome the above-mentioned disadvantages and may meet the recognized need for an imaging diagnostic tool to capture a plurality of datasets, including layered image information, density of scanned material, substructure, subsurface elements, other characteristics and the like, including a smart devic