US-12626521-B2 - Systems and methods for depth map sampling
Abstract
An electronic imaging device and method for image capture are described. The imaging device includes a camera configured to obtain image information of a scene, the camera being focusable on a region of interest in the scene. The imaging device also includes a LIDAR unit configured to obtain depth information of at least a portion of the scene at specified scan locations of the scene. The imaging device is configured to detect an object in the scene and to provide specified scan locations to the LIDAR unit. The camera is configured to capture an image with an adjusted focus based on depth information, obtained by the LIDAR unit, associated with the detected object.
Inventors
- Albrecht Johannes Lindner
- Volodimir Slobodyanyuk
- Stephen Michael Verrall
- Kalin Mitkov Atanassov
Assignees
- QUALCOMM INCORPORATED
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-01-09
Claims (20)
- 1 . An imaging device, comprising: a depth sensor configured to obtain depth information of at least a portion of a scene; and a processor configured to: obtain first depth information obtained from the depth sensor based on a first scan iteration of the depth sensor; generate a segmentation of image information of the scene; detect an object in the scene using the image information of the scene based on feedback information including the first depth information based on the first scan iteration of the depth sensor; provide one or more instructions to the depth sensor to obtain second depth information at a plurality of specified scan locations for at least a second scan iteration of the depth sensor, wherein the plurality of specified scan locations are characterized by a first scan point density for a first portion of the scene corresponding to a first segment associated with the object and a second scan point density for a second portion of the scene corresponding to a second segment, wherein the first scan point density is based on a comparison of image information of neighboring image segments within the object and a comparison of first depth information of the neighboring image segments within the object; and output the second depth information as feedback for detection of at least one of the object or an additional object in the scene using additional image information of the scene.
- 2 . The imaging device of claim 1 , wherein the depth sensor is a light-detection and ranging (LIDAR) unit configured to capture LIDAR data.
- 3 . The imaging device of claim 1 , wherein the processor is configured to: generate a first image segmentation of the image information associated with the object; and generate a second image segmentation of the image information associated with a portion of the scene not associated with the object.
- 4 . The imaging device of claim 3 , wherein one or more scan locations of the first portion of the scene are based on the first image segmentation, and wherein one or more scan locations of the second portion of the scene are based on the second image segmentation.
- 5 . The imaging device of claim 1 , wherein the first scan point density is greater than the second scan point density.
- 6 . The imaging device of claim 1 , wherein the object is a person or a face of the person.
- 7 . The imaging device of claim 6 , wherein the processor is configured to: detect the face using facial recognition.
- 8 . The imaging device of claim 1 , further comprising a user interface configured to receive user input indicating a selection of the object, wherein the processor is configured to detect the object based on the user input indicating the selection of the object.
- 9 . The imaging device of claim 1 , wherein the processor is configured to: generate a depth map for at least a portion of the scene based on the second depth information.
- 10 . The imaging device of claim 9 , wherein the processor is configured to: generate a three-dimensional model of the object based on the depth map.
- 11 . The imaging device of claim 9 , wherein the processor is configured to: generate an image segmentation of the image information associated with the object; and generate a refined depth map based on the image segmentation.
- 12 . The imaging device of claim 1 , wherein the processor is configured to: determine, based on the second depth information, a focus amount for a camera configured to capture the image information.
- 13 . The imaging device of claim 1 , wherein the processor is configured to: detect one or more objects in the scene based on the second depth information.
- 14 . The imaging device of claim 13 , wherein the processor is configured to: output an indication of the detected one or more objects in the scene for controlling a vehicle.
- 15 . A method for image capture, the method comprising: obtaining first depth information obtained from a depth sensor based on a first scan iteration of the depth sensor; generating a segmentation of image information of a scene; detecting an object in the scene using the image information of the scene based on feedback information including the first depth information based on the first scan iteration of the depth sensor; outputting one or more instructions to the depth sensor to obtain second depth information at a plurality of specified scan locations for at least a second scan iteration of the depth sensor, wherein the plurality of specified scan locations are characterized by a first scan point density for a first portion of the scene corresponding to a first segment associated with the object and a second scan point density for a second portion of the scene corresponding to a second segment, wherein the first scan point density is based on a comparison of image information of neighboring image segments within the object and a comparison of first depth information of the neighboring image segments within the object; and outputting the second depth information as feedback for detection of at least one of the object or an additional object in the scene using additional image information of the scene.
- 16 . The method of claim 15 , wherein the depth sensor is a light-detection and ranging (LIDAR) unit configured to capture LIDAR data.
- 17 . The method of claim 15 , further comprising: generating a first image segmentation of the image information associated with the object; and generating a second image segmentation of the image information associated with a portion of the scene not associated with the object.
- 18 . The method of claim 17 , wherein one or more scan locations of the first portion of the scene are based on the first image segmentation, and wherein one or more scan locations of the second portion of the scene are based on the second image segmentation.
- 19 . The method of claim 15 , wherein the first scan point density is greater than the second scan point density.
- 20 . The method of claim 15 , wherein the object is a person or a face of the person.
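Claim 1's central idea is a two-density scan plan: the segment containing the detected object is sampled at a higher scan point density than the rest of the scene. A minimal sketch of that idea follows, assuming a boolean object mask and illustrative grid step sizes (the function name and parameters are hypothetical, not from the patent):

```python
import numpy as np

def plan_scan_points(object_mask, object_step=4, background_step=16):
    """Choose LIDAR scan locations for one scan iteration.

    Illustrative sketch of claim 1: pixels inside the detected object's
    segment are sampled on a fine grid (first, higher scan point density)
    and the remainder of the scene on a coarse grid (second, lower
    density). Step sizes are hypothetical tuning parameters.
    """
    h, w = object_mask.shape
    points = []
    # Fine grid over the object segment, coarse grid over the background.
    for step, want_object in ((object_step, True), (background_step, False)):
        for r in range(0, h, step):
            for c in range(0, w, step):
                if bool(object_mask[r, c]) == want_object:
                    points.append((r, c))
    return points
```

A second scan iteration would re-run this planner with an updated mask derived from the first iteration's depth feedback, as the claim's feedback loop describes.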
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a Continuation of U.S. patent application Ser. No. 17/327,309, entitled “SYSTEMS AND METHODS FOR DEPTH MAP SAMPLING,” filed May 21, 2021, which is a Continuation of patent application Ser. No. 16/359,441, entitled “SYSTEMS AND METHODS FOR DEPTH MAP SAMPLING,” filed Mar. 20, 2019, which is a Continuation of patent application Ser. No. 14/833,573, entitled “SYSTEMS AND METHODS FOR DEPTH MAP SAMPLING,” filed Aug. 24, 2015, all of which are hereby expressly incorporated by reference herein in their entirety and for all applicable purposes.

FIELD OF DISCLOSURE

The present disclosure relates generally to electronic devices. More specifically, the present disclosure relates to systems and methods for depth map sampling.

BACKGROUND

In the last several decades, the use of electronic devices has become common. In particular, advances in electronic technology have reduced the cost of increasingly complex and useful electronic devices. Cost reduction and consumer demand have proliferated the use of electronic devices such that they are practically ubiquitous in modern society. As the use of electronic devices has expanded, so has the demand for new and improved features of electronic devices. More specifically, electronic devices that perform new functions and/or that perform functions faster, more efficiently, or with higher quality are often sought after. Some electronic devices (e.g., cameras, video camcorders, digital cameras, cellular phones, smart phones, computers, televisions, etc.) may create a depth map using a LIDAR (light detection and ranging) scan. A dense sampling of a scene using LIDAR scanning is costly in terms of time and power, which may result in low frame rates and battery drainage. As can be observed from this discussion, systems and methods that improve LIDAR depth map sampling may be beneficial.

SUMMARY

An electronic device is described. 
The electronic device includes a camera configured to capture an image of a scene. The electronic device also includes an image segmentation mapper configured to perform segmentation of the image based on image content to generate a plurality of image segments, each of the plurality of image segments associated with spatial coordinates indicative of a location of each segment in the scene. The electronic device further includes a memory configured to store the image and the spatial coordinates. The electronic device additionally includes a LIDAR (light detection and ranging) unit, the LIDAR unit steerable to selectively obtain depth values corresponding to at least a subset of the spatial coordinates. The electronic device further includes a depth mapper configured to generate a depth map of the scene based on the depth values and the spatial coordinates. At least a portion of the image segments comprise non-uniform segments that define borders of an object within the image. The image segmentation mapper may be configured to perform segmentation based on an image complexity. A quantity of segments generated may be a function of a determined complexity of the image. The LIDAR unit may be configured to perform a coarse scan over a region of the scene containing a substantially uniform object within the scene. The image segmentation mapper may be configured to provide the spatial coordinates to the LIDAR unit and may be configured to provide the segments of the image to the depth mapper. The depth mapper may be configured to generate the depth map by merging the segments with corresponding depth values. The depth mapper may be configured to generate the depth map by populating each segment with a corresponding depth value obtained by the LIDAR unit at the spatial coordinates. The number of depth values obtained by the LIDAR unit may be configured to be adjusted based on feedback from a prior depth map. 
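The depth mapper described above populates each image segment with the single depth value the LIDAR unit measured at that segment's scan coordinate. A minimal sketch of that merging step, assuming an integer label map and a hypothetical label-to-depth mapping:

```python
import numpy as np

def populate_depth_map(labels, depth_samples):
    """Merge image segments with their LIDAR depth values.

    Illustrative sketch of the depth mapper in the summary: `labels` is
    an H x W array assigning each pixel a segment label, and
    `depth_samples` maps each label to the depth measured at that
    segment's scan coordinate. Names are hypothetical.
    """
    depth_map = np.zeros(labels.shape, dtype=float)
    # Fill every pixel of a segment with that segment's sampled depth.
    for label, depth in depth_samples.items():
        depth_map[labels == label] = depth
    return depth_map
```

In the described feedback loop, a prior depth map produced this way could then drive how many depth values the LIDAR unit obtains on the next scan.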
The spatial coordinates of the segments may correspond to centroids of the segments. A method is also described. The method includes capturing an image of a scene. The method also includes performing segmentation of the image based on image content to generate a plurality of image segments, each of the plurality of segments associated with spatial coordinates indicative of a location of each segment in the scene. The method further includes obtaining, by a LIDAR (light detection and ranging) unit, depth values corresponding to at least a subset of the spatial coordinates. The LIDAR unit is steerable to selectively obtain the depth values. The method additionally includes generating a depth map of the scene based on the depth values and the spatial coordinates. An apparatus is also described. The apparatus includes means for capturing an image of a scene. The apparatus also includes means for performing segmentation of the image based on image content to generate a plurality of image segments, each of the plurality of image segments associated with spatial coordinates indicative of a location of each segment in the scene. The apparatus further includes means for obtaining depth values corresponding to at least a subset of the spatial coordinates.
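The summary notes that the spatial coordinates of the segments may correspond to their centroids, giving the steerable LIDAR unit one scan location per segment. A minimal sketch of deriving those centroids from a label map (function name is illustrative, not from the patent):

```python
import numpy as np

def segment_centroids(labels):
    """Compute one (row, col) centroid per segment label.

    Illustrative sketch: each segment's centroid serves as the spatial
    coordinate at which the steerable LIDAR unit obtains that segment's
    depth value.
    """
    coords = {}
    for label in np.unique(labels):
        rows, cols = np.nonzero(labels == label)
        # Mean pixel position of the segment is its centroid.
        coords[int(label)] = (float(rows.mean()), float(cols.mean()))
    return coords
```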