US-12627905-B2 - Method and system for estimating auto-focus disparity in a CMOS image sensor
Abstract
Disclosed is a method for estimating an Auto-Focus (AF) disparity in a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor. The method includes acquiring left phase AF pixel data and right phase AF pixel data from a pixel array of the CMOS image sensor and generating a difference channel signal and a sum channel signal by pre-processing the left and right AF pixel data. The method includes filtering the difference channel signal and the sum channel signal and calculating a First-Order Derivative (FOD) of the sum channel signal. The method further includes generating a T-processing signal based on a product of the difference channel signal and the sum channel signal, determining, within the generated T-processing signal, a plurality of peak locations exceeding a dynamic threshold value, and estimating the AF disparity based on the plurality of peak locations and the FOD of the sum channel signal.
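As a rough illustration of the pipeline summarized above (the patent discloses no source code), the steps can be sketched in Python/NumPy on 1-D left/right AF line signals. The SMC stand-in, the filter kernels, the threshold rule, and the ratio-based per-peak disparity below are all assumptions of this sketch, not the claimed implementation.

```python
import numpy as np

def estimate_af_disparity(left, right, fir=(0.25, 0.5, 0.25), k=2.0):
    """Hedged sketch of the abstract's pipeline on 1-D AF line signals.

    The SMC stand-in, FIR kernels, threshold rule, and ratio-based
    per-peak disparity are illustrative assumptions.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)

    # Crude stand-in for shading mismatch compensation (SMC): equalize the
    # overall left/right intensity with a single global gain.
    right = right * (left.mean() / (right.mean() + 1e-12))

    # Difference and sum channel signals.
    diff = left - right
    ssum = left + right

    # Low-order FIR noise removal on both channels.
    fir = np.asarray(fir, dtype=float)
    diff_f = np.convolve(diff, fir, mode="same")
    sum_f = np.convolve(ssum, fir, mode="same")

    # First-order derivative (FOD) of the sum channel via a 3-tap
    # central-difference FIR filter (kernel choice assumed).
    fod = np.convolve(sum_f, [0.5, 0.0, -0.5], mode="same")

    # T-processing signal: product of the filtered channels.
    t = diff_f * sum_f

    # Local maxima of |T| exceeding a dynamic (signal-dependent) threshold.
    mag = np.abs(t)
    thr = k * np.std(t)
    peaks = [i for i in range(1, len(t) - 1)
             if mag[i] > thr and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]

    # Discard peaks where the sum-channel gradient is too weak for a
    # stable ratio (a guard added for this sketch).
    floor = 0.05 * np.max(np.abs(fod))
    peaks = [i for i in peaks if abs(fod[i]) > floor]
    if not peaks:
        return 0.0

    # Per-peak disparity from the ratio of the difference channel to the
    # sum-channel FOD, combined as a |T|-weighted mean.
    d = np.array([diff_f[i] / fod[i] for i in peaks])
    w = mag[peaks]
    return float(np.sum(w * d) / np.sum(w))
```

On synthetic left/right signals that are shifted copies of one another, the estimate's sign tracks the shift direction and flips when the inputs are swapped; the magnitude scale depends on the ratio convention assumed here.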
Inventors
- Girish Kalyanasundaram
- Sreeja J
Assignees
- SAMSUNG ELECTRONICS CO., LTD.
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-09-09
- Priority Date: 2024-04-12
Claims (20)
- 1 . A method for estimating an Auto-Focus (AF) disparity in a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor, the method comprising: acquiring left phase AF pixel data and right phase AF pixel data from a pixel array of the CMOS image sensor; pre-processing each of the left phase AF pixel data and the right phase AF pixel data using a left-to-right pixel shading mismatch compensation (SMC) module; generating a difference channel signal and a sum channel signal based on the processed left phase AF pixel data and the processed right phase AF pixel data; filtering each of the difference channel signal and the sum channel signal using a noise removal filter; calculating a First-Order Derivative (FOD) of the sum channel signal using a low-order Finite Impulse Response (FIR) filter; generating a T-processing signal based on a product of the filtered difference channel signal and the filtered sum channel signal; determining, within the generated T-processing signal, a plurality of peak locations that exceeds a dynamic threshold value; and estimating the AF disparity in the CMOS image sensor based on the determined plurality of peak locations, the filtered difference channel signal, and the calculated FOD of the sum channel signal.
- 2 . The method of claim 1 , wherein the pre-processing of the left phase AF pixel data and the right phase AF pixel data comprises: determining a difference between a luminance profile of the left phase AF pixel data and a luminance profile of the right phase AF pixel data; and mitigating, using the SMC module, a shading profile mismatch between the left phase AF pixel data and right phase AF pixel data based on the determined difference.
- 3 . The method of claim 1 , wherein: the difference channel signal corresponds to a first image signal that indicates a difference between the processed left phase AF pixel data and the processed right phase AF pixel data, and the sum channel signal corresponds to a second image signal that indicates a sum of the processed left phase AF pixel data and the processed right phase AF pixel data.
- 4 . The method of claim 1 , wherein the noise removal filter corresponds to a hybrid filter that is a combination of a first-order-derivative based filter and the low-order FIR filter.
- 5 . The method of claim 1 , wherein the generating the difference channel signal and the sum channel signal comprises: calculating a difference between the processed left phase AF pixel data and the processed right phase AF pixel data; calculating a sum of the processed left phase AF pixel data and the processed right phase AF pixel data; and generating the difference channel signal and the sum channel signal based on the calculated difference and the calculated sum.
- 6 . The method of claim 1 , wherein the FOD of the sum channel signal corresponds to a low-cost approximation of the first-order derivative of the sum channel signal.
- 7 . The method of claim 1 , wherein the estimation of the AF disparity in the CMOS image sensor comprises: determining, based on a ratio of the sum channel signal and the FOD of the sum channel signal, each of corresponding disparity values in a Region of Interest (ROI) corresponding to each peak location among the plurality of peak locations; and estimating the AF disparity in a collection of regions of interest in the CMOS image sensor as a function of the determined corresponding disparity values.
- 8 . The method of claim 7 , wherein the function of the determined corresponding disparity values corresponds to a weighted mean of the determined corresponding disparity values in the ROI corresponding to each peak location among the plurality of peak locations.
- 9 . A camera autofocus system, comprising: a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor including a pixel array; a left-to-right pixel shading mismatch compensation (SMC) module; and at least one processor communicatively coupled with the CMOS image sensor and the SMC module, wherein the at least one processor is configured to acquire left phase Auto-Focus (AF) pixel data and right phase Auto-Focus (AF) pixel data from the pixel array of the CMOS image sensor; pre-process each of the left phase AF pixel data and the right phase AF pixel data using the SMC module; generate a difference channel signal and a sum channel signal based on the processed left phase AF pixel data and the processed right phase AF pixel data; filter each of the difference channel signal and the sum channel signal using a noise removal filter; calculate a First-Order Derivative (FOD) of the sum channel signal using a low-order Finite Impulse Response (FIR) filter; generate a T-processing signal based on a product of the filtered difference channel signal and the filtered sum channel signal; determine, within the generated T-processing signal, a plurality of peak locations that exceeds a dynamic threshold value; and estimate an Auto-Focus (AF) disparity in the CMOS image sensor based on the determined plurality of peak locations, the filtered difference channel signal, and the calculated FOD of the sum channel signal.
- 10 . The system of claim 9 , wherein to pre-process the left phase AF pixel data and the right phase AF pixel data, the at least one processor is configured to: determine a difference between a luminance profile of the left phase AF pixel data and a luminance profile of the right phase AF pixel data; and mitigate, using the SMC module, a shading profile mismatch between the left phase AF pixel data and right phase AF pixel data based on the determined difference.
- 11 . The system of claim 9 , wherein, the difference channel signal corresponds to a first image signal that indicates a difference between the processed left phase AF pixel data and the processed right phase AF pixel data, and the sum channel signal corresponds to a second image signal that indicates a sum of the processed left phase AF pixel data and the processed right phase AF pixel data.
- 12 . The system of claim 9 , wherein the noise removal filter corresponds to a hybrid filter that is a combination of a first-order-derivative based filter and the low-order FIR filter.
- 13 . The system of claim 9 , wherein to generate the difference channel signal and the sum channel signal, the at least one processor is configured to: calculate a difference between the processed left phase AF pixel data and the processed right phase AF pixel data; calculate a sum of the processed left phase AF pixel data and the processed right phase AF pixel data; and generate the difference channel signal and the sum channel signal based on the calculated difference and the calculated sum.
- 14 . The system of claim 9 , wherein the FOD of the sum channel signal corresponds to a low-cost approximation of the first-order derivative of the sum channel signal.
- 15 . The system of claim 9 , wherein to estimate the AF disparity in the CMOS image sensor, the at least one processor is configured to: determine, based on a ratio of the sum channel signal and the FOD of the sum channel signal, each of corresponding disparity values in a Region of Interest (ROI) corresponding to each peak location among the plurality of peak locations; and estimate an AF disparity in a collection of regions of interest in the CMOS image sensor as a function of the determined corresponding disparity values.
- 16 . The system of claim 15 , wherein the function of the determined corresponding disparity values corresponds to a weighted mean of the determined corresponding disparity values in the ROI corresponding to each peak location among the plurality of peak locations.
- 17 . A Complementary Metal-Oxide-Semiconductor (CMOS) image sensor, comprising: a pixel array; a left-to-right pixel shading mismatch compensation (SMC) module; and at least one processor operatively coupled to the SMC module, wherein the at least one processor is configured to acquire left phase Auto-Focus (AF) pixel data and right phase Auto-Focus (AF) pixel data from the pixel array; pre-process each of the left phase AF pixel data and the right phase AF pixel data using the SMC module; generate a difference channel signal and a sum channel signal based on the processed left phase AF pixel data and the processed right phase AF pixel data; filter each of the difference channel signal and the sum channel signal using a noise removal filter; calculate a First-Order Derivative (FOD) of the sum channel signal using a low-order Finite Impulse Response (FIR) filter; generate a T-processing signal based on a product of the filtered difference channel signal and the filtered sum channel signal; determine, within the generated T-processing signal, a plurality of peak locations that exceeds a dynamic threshold value; and estimate an Auto-Focus (AF) disparity in the CMOS image sensor based on the determined plurality of peak locations, the filtered difference channel signal, and the calculated FOD of the sum channel signal.
- 18 . The CMOS image sensor of claim 17 , wherein to pre-process the left phase AF pixel data and the right phase AF pixel data, the at least one processor is configured to: determine a difference between a luminance profile of the left phase AF pixel data and a luminance profile of the right phase AF pixel data; and mitigate, using the SMC module, a shading profile mismatch between the left phase AF pixel data and right phase AF pixel data based on the determined difference.
- 19 . The CMOS image sensor of claim 17 , wherein the noise removal filter corresponds to a hybrid filter that is a combination of a first-order-derivative based filter and the low-order FIR filter.
- 20 . The CMOS image sensor of claim 17 , wherein to generate the difference channel signal and the sum channel signal, the at least one processor is configured to: calculate a difference between the processed left phase AF pixel data and the processed right phase AF pixel data; calculate a sum of the processed left phase AF pixel data and the processed right phase AF pixel data; and generate the difference channel signal and the sum channel signal based on the calculated difference and the calculated sum.
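Claims 1, 6, and 14 describe computing the FOD of the sum channel with a low-order FIR filter as a low-cost approximation of the true first-order derivative. A minimal sketch of such a filter follows; the 3-tap central-difference kernel is an assumed choice, since the claims do not fix the taps.

```python
import numpy as np

# Assumed low-order FIR kernel: a 3-tap central difference, one inexpensive
# first-order-derivative approximation of the kind the claims describe.
CENTRAL_DIFF = np.array([0.5, 0.0, -0.5])

def fod(signal):
    """Approximate the first-order derivative (per sample) by FIR filtering."""
    return np.convolve(signal, CENTRAL_DIFF, mode="same")

# Check against a known analytic derivative: d/dx sin(x) = cos(x).
x = np.linspace(0.0, 2.0 * np.pi, 200)
dx = x[1] - x[0]
approx = fod(np.sin(x)) / dx   # divide by the sample spacing to get d/dx
exact = np.cos(x)
max_err = np.max(np.abs(approx[2:-2] - exact[2:-2]))  # interior samples only
```

The central difference costs two multiply-adds per output sample and has second-order accuracy on smooth signals, which is what makes a low-order FIR attractive as a "low-cost approximation" in sensor hardware; only the two boundary samples of the output are unreliable.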
Description
CROSS-REFERENCE TO RELATED APPLICATION
This application is based on and claims priority under 35 U.S.C. § 119 to Indian Patent Application No. 202441029873, filed on Apr. 12, 2024, in the Indian Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
Various example embodiments generally relate to the field of semiconductor devices, and more particularly to a method and a system for estimating auto-focus disparity in a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor.
DISCUSSION OF RELATED ARTS
In the realm of photography and imaging, achieving precise focus plays a significant role in capturing high-quality images. A widely used focusing technique is phase-difference Auto-Focus (AF), which utilizes specialized sensor elements known as AF pixels. The AF pixels are designed to receive only a fraction, such as a half or a quarter, of a beam of incoming light. An image aggregated from a set of AF pixels collecting a fractional beam of light arriving along one particular path is spatially displaced from an image aggregated from a set of AF pixels collecting a fractional beam of light arriving along a different path. This spatial displacement is known as disparity or phase difference; it forms the basis of phase-difference AF and is essential for determining the necessary (or advantageous) lens adjustment required to achieve AF. To direct the lens accurately, the phase difference captured by the AF pixels is analyzed by a camera autofocus system. By comparing the phase difference between the two slightly displaced images, the camera autofocus system can calculate the direction and magnitude of the lens adjustment needed to achieve AF. FIG. 1A illustrates an example scenario depicting a relationship between lens displacement and disparity in the phase-difference AF of a conventional camera autofocus system, in accordance with related arts.
As shown in FIG. 1A, the overall method for estimating the disparity in phase-difference AF involves analyzing a phase difference between two images captured by the AF pixels (e.g., left and right AF pixels) of a sensor matrix of the conventional camera autofocus system. This phase difference represents a disparity, or a difference in the light path, between the two images. By analyzing this disparity (D), the conventional camera autofocus system can determine the degree of focus adjustment required to achieve sharp focus and thereby generate a resulting image. A detailed example flow of the conventional method for estimating the disparity in phase-difference AF is shown in FIG. 2, in accordance with related arts. As shown in FIG. 2, the conventional method involves acquiring pixel data from a pixel array and feeding the pixel data to an Image Signal Processor (ISP), which is responsible for pre-processing, at the front end, the pixel data captured by the image sensor. During the pre-processing, the ISP extracts luma components of the pixel data specifically for estimating the disparity. Thereafter, the luma components are analyzed to estimate a phase difference between two images (e.g., a left AF image and a right AF image) corresponding to the AF pixels of the pixel data (e.g., left and right AF pixels). This estimated phase difference represents the disparity, or the difference in the light path, between the two images. The estimated disparity is then mapped to a lens displacement in order to inform an AF actuator of the conventional camera autofocus system of a new lens position. Another conventional method for estimating the disparity in phase-difference AF involves cost-function-based disparity estimation.
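The shift-and-compare search underlying such cost-function methods can be sketched as follows. A sum-of-absolute-differences (SAD) cost over candidate integer shifts is one common choice; the SAD cost, the search range, and the sign convention below are assumptions of this sketch, not details fixed by the related art.

```python
import numpy as np

def sad_disparity(left, right, max_shift=8):
    """Pick the integer shift minimizing a sum-of-absolute-differences
    (SAD) cost between the two AF images (cost choice assumed)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # Compare the overlapping region of the two signals at shift s.
        if s >= 0:
            cost = np.mean(np.abs(left[s:] - right[:len(right) - s]))
        else:
            cost = np.mean(np.abs(left[:s] - right[-s:]))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

The minimizing shift is then mapped to a lens displacement, as the related art describes. Which image is treated as shifted (and hence the sign of the result) is a convention of this sketch.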
The cost-function-based disparity estimation involves generating a cost function that measures a difference between captured phase information and a reference phase. By computing a minimum (or reduced) value of the cost function, the conventional camera autofocus system can determine an optimal focus position. A conventional method of cost-function-based disparity estimation is shown in FIG. 1B of the drawings, in accordance with related arts. As shown in FIG. 1B, a cost-function value is first calculated between the right AF pixels and the left AF pixels to estimate the disparity in phase-difference AF. Thereafter, for each value of estimated disparity, the left and right AF images are shifted with respect to a minimum (or reduced) cost function in order to map the estimated disparity to a lens displacement. Furthermore, as technology advances, there is an increasing demand for AF algorithms to be integrated into sensor hardware, allowing for more efficient and streamlined autofocus capabilities. The need for fast and reliable autofocus convergence is partic