EP-4735942-A2 - IMAGE COMPOSITING WITH ADJACENT LOW-PARALLAX CAMERAS
Abstract
A low-parallax multi-camera imaging system may enable combination of images from multiple camera channels into a panoramic image. In some examples, the imaging system may be designed to include small areas of overlap between adjacent camera channels. Panoramic images may be generated by compositing image data from multiple camera channels using techniques described herein. In some examples, the contribution of each camera channel may be weighted based on factors such as distance relative to an overlap region or content within the overlap region.
Inventors
- NIAZI, Zakariya
- KURTZ, Andrew F.
- STUBLER, Peter O.
- BOWRON, John
- BALLER, Mitchell H.
- KRISILOFF, Allen
- ANNESE, Grace
Assignees
- Circle Optics, Inc.
Dates
- Publication Date
- 20260506
- Application Date
- 20240712
Claims (15)
- 1. A multi-camera system for generating a panoramic image, the multi-camera system comprising: a plurality of camera channels, individual of the plurality of camera channels being configured to capture image data in a respective field of view; memory; a processor; and computer-executable instructions stored in the memory and executable by the processor to perform operations comprising: receiving information specifying a panoramic image to be generated; for a pixel location in the panoramic image, determining, based on the information and camera configuration data associated with the plurality of camera channels, at least a first camera channel associated with a first field of view and a second camera channel associated with a second field of view, wherein the first field of view and the second field of view include the pixel location; determining, based on the camera configuration data, an overlap region between a first image captured by the first camera channel and a second image captured by the second camera channel; determining, based on a first portion of the first image in the overlap region and a second portion of the second image in the overlap region, a pixel value associated with the pixel location; and generating the panoramic image including the pixel value at the pixel location.
- 2. The multi-camera system of claim 1, wherein determining the pixel value comprises: determining a weighted average of a first value of a first pixel in the first portion of the first image and a second value of a second pixel in the second portion of the second image, wherein the pixel value associated with the pixel location is based on the weighted average.
- 3. The multi-camera system of claim 2, wherein weights of the weighted average are based on a first distance between the first pixel and an edge of the overlap region and a second distance between the second pixel and the edge of the overlap region.
- 4. The multi-camera system of claim 2, wherein weights of the weighted average are based on a first distance between the first pixel and a center pixel of the first image and a second distance between the second pixel and a center pixel of the second image.
- 5. The multi-camera system of claim 1, wherein determining the pixel value is based on inputting, to a machine-learned model, a first value of a first pixel in the first portion of the first image and a second value of a second pixel in the second portion of the second image.
- 6. The multi-camera system of claim 1, wherein determining the pixel value comprises: determining a first weight corresponding to the first image and a second weight corresponding to the second image; and sampling pixel values from the first image and the second image based on the first weight and the second weight, wherein the pixel value is based on the sampled pixel values.
- 7. The multi-camera system of claim 1, wherein determining the pixel value is based on content of the first image and the second image in the overlap region.
- 8. The multi-camera system of claim 7, the operations further comprising: determining a frequency signature of the content; and based on the frequency signature, determining the pixel value using one of: a weighted average of pixel values of the first image and the second image, or stochastic sampling of pixel values of the first image and the second image.
- 9. The multi-camera system of claim 7, wherein the content comprises one of: a flare or a veiling glare.
- 10. The multi-camera system of claim 1, wherein: the plurality of camera channels comprise at least three low-parallax camera channels, the respective field of view comprises a polygon of more than four sides, and the panoramic image comprises an equirectangular panorama.
- 11. The multi-camera system of claim 1, wherein the camera configuration data includes intrinsic calibration data and extrinsic calibration data of the plurality of camera channels.
- 12. The multi-camera system of claim 1, wherein determining the first camera channel comprises: determining, based on the camera configuration data, a location on an imaging sphere corresponding to the multi-camera system associated with the pixel location in the panoramic image; and determining that the first field of view includes the location on the imaging sphere.
- 13. The multi-camera system of claim 1, the operations further comprising: determining respective exposure levels associated with the first camera channel and the second camera channel; and adjusting, based on the respective exposure levels, pixel values in the overlap region of the first image and the second image.
- 14. The multi-camera system of claim 1, the operations further comprising: receiving an object track associated with two or more camera channels of the plurality of camera channels, wherein determining the first image and the second image is based on the object track.
- 15. The multi-camera system of claim 1, wherein the panoramic image is a first panoramic image of a scene and the first image and the second image are captured from a first position of the multi-camera system, the operations further comprising: receiving a set of images of the scene captured from a second position of the multi-camera system; determining, based on the set of images, a second panoramic image; and determining, based on the first panoramic image and the second panoramic image, a 3D model of a portion of the scene.
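The distance-based weighted averaging recited in claims 2 and 3 can be illustrated as follows. This is a minimal sketch, not the claimed implementation; the function name `blend_overlap` and the per-pixel distance maps `dist_a`/`dist_b` (each pixel's distance to the far edge of the overlap region in the respective source image) are assumptions for illustration:

```python
import numpy as np

def blend_overlap(img_a: np.ndarray, img_b: np.ndarray,
                  dist_a: np.ndarray, dist_b: np.ndarray) -> np.ndarray:
    """Blend two aligned overlap-region crops with distance-based weights.

    dist_a / dist_b hold, per pixel, that pixel's distance to the far edge
    of the overlap region in the respective source image, so a pixel deep
    inside camera A's field of view gets a large dist_a and camera A
    dominates the weighted average (cf. claim 3).
    """
    w_a = dist_a.astype(np.float64)
    w_b = dist_b.astype(np.float64)
    total = w_a + w_b
    total[total == 0] = 1.0  # guard against division by zero
    return (w_a * img_a + w_b * img_b) / total
```

A pixel at the boundary where camera A's coverage ends (`dist_a` near zero) thus takes its value almost entirely from camera B, which yields a seamless falloff across the seam.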
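Claims 6 through 8 recite choosing between a weighted average and stochastic sampling of pixel values based on a frequency signature of the overlap content. The sketch below uses mean gradient magnitude as a stand-in frequency measure; the patent does not fix a specific measure, and the function name, threshold, and fixed random seed are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (illustrative)

def composite_pixelwise(img_a: np.ndarray, img_b: np.ndarray,
                        w_a: float = 0.5,
                        high_freq_threshold: float = 10.0) -> np.ndarray:
    """Pick a compositing strategy from the content's frequency signature.

    The 'signature' is approximated here by the mean gradient magnitude
    of the overlap content; low values indicate smooth content.
    """
    gy, gx = np.gradient(img_a.astype(np.float64))
    freq = np.abs(gx).mean() + np.abs(gy).mean()
    if freq < high_freq_threshold:
        # Smooth content: a weighted average blends invisibly.
        return w_a * img_a + (1.0 - w_a) * img_b
    # Busy content: stochastically sample whole pixels from either image,
    # so fine detail is not softened by averaging.
    mask = rng.random(img_a.shape) < w_a
    return np.where(mask, img_a, img_b)
```

The rationale for the split is that averaging high-frequency content from two imperfectly registered channels tends to produce visible ghosting, whereas stochastic sampling preserves sharpness at the cost of per-pixel noise that is masked by the busy content itself.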
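Claim 12 recites mapping a panoramic pixel location to a location on an imaging sphere and determining which channel's field of view includes that location. One possible sketch for an equirectangular panorama follows, approximating each channel's field of view as a cone about its optical axis; the polygonal fields of view of claim 10 would need a more detailed containment test, and all names and parameters here are assumptions:

```python
import math

def panorama_pixel_to_direction(u: float, v: float,
                                width: int, height: int) -> tuple:
    """Map an equirectangular pixel (u, v) to a unit direction on the
    imaging sphere: longitude spans [-pi, pi], latitude [-pi/2, pi/2]."""
    lon = (u + 0.5) / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v + 0.5) / height * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def in_field_of_view(direction: tuple, channel_axis: tuple,
                     half_angle_rad: float) -> bool:
    """A direction lies in a channel's (conical approximation of a) field
    of view when its angle to the channel axis is within the half-angle."""
    dot = sum(d * a for d, a in zip(direction, channel_axis))
    return math.acos(max(-1.0, min(1.0, dot))) <= half_angle_rad
```

In practice the channel axes and half-angles would come from the extrinsic and intrinsic calibration data of claim 11; a pixel whose direction falls inside two channels' fields of view is the overlap case handled by the blending claims above.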
Description
IMAGE COMPOSITING WITH ADJACENT LOW-PARALLAX CAMERAS

CROSS REFERENCE TO RELATED APPLICATION

[0001] This disclosure claims the benefit of priority of: U.S. Provisional Patent Application Ser. No. 63/513,707, entitled "Image Compositing with Adjacent Low Parallax Cameras," and U.S. Provisional Patent Application Ser. No. 63/513,721, entitled "Visor Type Camera Array Systems," both of which were filed on July 14, 2023, and the entirety of each of which is incorporated herein by reference.

DISCLOSURE

[0002] This invention was made with U.S. Government support under grant number 2136737 awarded by the National Science Foundation. The Government has certain rights in this invention.

TECHNICAL FIELD

[0003] The present disclosure relates to panoramic low-parallax multi-camera capture devices having a plurality of adjacent and abutting polygonal cameras, and to techniques for processing images generated by the individual cameras of such devices. The disclosure also relates to methods and systems for calibrating, tiling, and blending the individual images and an aggregated panoramic image.

BACKGROUND

[0004] Panoramic cameras have substantial value because of their ability to simultaneously capture wide field of view images. The earliest such example is the fisheye lens, an ultra-wide-angle lens that produces strong visual distortion while capturing a wide panoramic or hemispherical image. While the field of view (FOV) of a fisheye lens is usually between 100 and 180 degrees, the approach has been extended to yet larger angles, including into the 220-270° range, as provided by Y. Shimizu in US 3,524,697.

[0005] As another alternative, panoramic multi-camera devices, with a plurality of cameras arranged around a sphere or a circumference of a sphere, are becoming increasingly common. However, in most of these systems, including those described in US 9,451,162 and US 9,911,454, both to A. Van Hoff et al., of Jaunt Inc., the plurality of cameras sparsely populate the outer surface of the device. In order to capture complete 360-degree panoramic images, including for the gaps or seams between adjacent individual cameras, the cameras have widened FOVs that overlap one another. In some cases, as much as 50% of a camera's FOV or resolution may be used for camera-to-camera overlap, which also creates substantial parallax differences between the captured images. Parallax is the visual perception that the position or direction of an object appears to be different when viewed from different positions. In subsequent image processing, the excess image overlap and parallax differences both complicate and significantly slow the efforts to properly combine, tile or stitch, and synthesize acceptable images from the images captured by adjacent cameras. As discussed in US 11,064,116, by B. Adsumilli et al., in a multi-camera system, differences in parallax and performance between adjacent cameras can be corrected by actively selecting the image stitching algorithm to apply based on detected image feature data.

[0006] As an alternative, US 10,341,559 by Z. Niazi provides a multi-camera system in which a plurality of adjacent low-parallax cameras are assembled together in proximity along parallel edges, to produce real-time composite panoramic images. The processes and methods used to calibrate individual camera channels and their images and to assemble the composite images can affect the resulting panoramic image quality or analytics.

[0007] Thus, there remain opportunities to improve the assembly of the individual and aggregated images produced by low-parallax panoramic multi-camera devices. In particular, the development of improved and optimized image calibration and rendering techniques can both improve various aspects of output image quality, for viewing or for analytical applications, and reduce image processing time, to facilitate the real-time output of tiled composite panoramic images.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The detailed description is described with reference to the accompanying figures. The same reference numbers in different figures indicate similar or identical items.

[0009] FIG. 1 depicts a 3D view of a portion of a multi-camera capture device, and specifically two adjacent cameras thereof.

[0010] FIG. 2A depicts a cross-sectional view of an example improved imaging lens system that may be used in a multi-camera capture device.

[0011] FIG. 2B depicts a cross-sectional view of a low-parallax volume of the example imaging lens system of FIG. 2A.

[0012] FIG. 2C depicts front color at an edge of an outer lens element of the example imaging lens system of FIG. 2A.

[0013] FIG. 2D depicts a graph of parallax differences for a camera channel, relative to a center of perspective.

[0014] FIG. 3A and FIG. 3B depict fields of view for adjacent cameras, including both core and extended fields of view (FOV), useful in designing the multi-camera capture device of FIG. 1.

[0015] FIG. 4A de