EP-4738875-A1 - RECONFIGURABLE HYBRID EVENT-BASED VISION SENSOR AND CMOS IMAGE SENSOR

EP 4738875 A1

Abstract

A method to operate a reconfigurable hybrid event-based vision sensor is disclosed. The hybrid image sensor has a pixel circuit array including a plurality of pixel circuits. The method comprises: detecting motion in an external scene; and based on the detected motion, configuring the hybrid image sensor into a hybrid mode in which a first subset of the plurality of pixel circuits are operated in an event vision sensor (EVS) configuration to capture contrast information, and a second subset of the plurality of pixel circuits are operated in a CMOS image sensor (CIS) configuration to capture intensity information.

Inventors

  • SUESS, ANDREAS
  • TSAU, GUANNHO

Assignees

  • OmniVision Technologies, Inc.

Dates

Publication Date
2026-05-06
Application Date
2025-10-16

Claims (20)

  1. A method of operating a hybrid image sensor having a pixel circuit array including a plurality of pixel circuits, the method comprising: detecting motion in an external scene; and based at least in part on the detected motion, configuring the hybrid image sensor into a hybrid mode in which a first subset of the plurality of pixel circuits are operated in an event vision sensor (EVS) configuration to capture contrast information and a second subset of the plurality of pixel circuits are operated in a CMOS image sensor (CIS) configuration to capture intensity information.
  2. The method of claim 1, wherein configuring the hybrid image sensor into the hybrid mode includes switching the hybrid image sensor from (i) a CIS-only mode in which all of the plurality of pixel circuits are operated in the CIS configuration to capture intensity information corresponding to the external scene to (ii) the hybrid mode, and wherein switching the hybrid image sensor from the CIS-only mode to the hybrid mode includes reconfiguring the first subset of pixel circuits of the plurality of pixel circuits into the EVS configuration to capture contrast information corresponding to the external scene.
  3. The method of claim 1 or claim 2, wherein detecting the motion in the external scene includes analyzing intensity differences between successive intensity frames of a plurality of intensity frames.
  4. The method of claim 3, further comprising capturing the plurality of intensity frames while the hybrid image sensor is configured in a CIS-only mode in which all of the plurality of pixel circuits are operated in the CIS configuration to capture intensity information.
  5. The method of claim 3, further comprising capturing the plurality of intensity frames while the hybrid image sensor is configured in the hybrid mode.
  6. The method of any of claims 1-5, further comprising controlling, based at least in part on luminance levels in the external scene, a number of pixel circuits of the plurality of pixel circuits included in the first subset.
  7. The method of claim 6, wherein controlling the number of pixel circuits included in the first subset includes controlling the number of pixel circuits included in the first subset such that the number of pixel circuits included in the first subset is inversely related to the luminance levels in the external scene.
  8. The method of claim 6 or claim 7, further comprising determining the luminance levels in the external scene.
  9. The method of claim 8, wherein determining the luminance levels includes determining the luminance levels based at least in part on an intensity frame corresponding to the external scene.
  10. The method of claim 8 or claim 9, wherein determining the luminance levels includes determining the luminance levels based at least in part on measurements captured by an ambient light sensor.
  11. The method of any of claims 6-10, wherein controlling the number of pixel circuits included in the first subset includes configuring, based at least in part on the luminance levels, 50% of the pixel circuits of the plurality of pixel circuits into the EVS configuration and another 50% of the pixel circuits of the plurality of pixel circuits into the CIS configuration.
  12. The method of any of claims 6-10, wherein controlling the number of pixel circuits included within the first subset includes configuring, based at least in part on the luminance levels, 12.5% of the pixel circuits of the plurality of pixel circuits into the EVS configuration and 87.5% of the pixel circuits of the plurality of pixel circuits into the CIS configuration.
  13. The method of any of claims 6-10, wherein controlling the number of pixel circuits included within the first subset includes configuring, based at least in part on the luminance levels, 25% of the pixel circuits of the plurality of pixel circuits into the EVS configuration and 75% of the pixel circuits of the plurality of pixel circuits into the CIS configuration.
  14. The method of any of claims 6-10, wherein controlling the number of pixel circuits included within the first subset includes configuring, based at least in part on the luminance levels, 6.25% of the pixel circuits of the plurality of pixel circuits into the EVS configuration and 93.75% of the pixel circuits of the plurality of pixel circuits into the CIS configuration.
  15. The method of any of claims 6-10, wherein controlling the number of pixel circuits included within the first subset includes configuring, based at least in part on the luminance levels, 3.125% of the pixel circuits of the plurality of pixel circuits into the EVS configuration and 96.875% of the pixel circuits of the plurality of pixel circuits into the CIS configuration.
  16. The method of claim 15, wherein configuring 3.125% of the pixel circuits of the plurality of pixel circuits into the EVS configuration and 96.875% of the pixel circuits of the plurality of pixel circuits into the CIS configuration includes using charge photogenerated by a same photosensor of a pixel circuit to simultaneously contribute to both EVS and CIS functionality of the hybrid image sensor.
  17. The method of any of claims 1-16, wherein: detecting the motion in the external scene includes analyzing first intensity differences between first successive intensity frames of a plurality of intensity frames corresponding to the external scene; and the method further comprises: capturing the plurality of intensity frames corresponding to the external scene, wherein capturing the plurality of intensity frames includes capturing the plurality of intensity frames using the hybrid image sensor while the hybrid image sensor is in a CIS-only mode in which all of the plurality of pixel circuits are operated in the CIS configuration to capture intensity information; analyzing second intensity differences between second successive intensity frames of the plurality of intensity frames to determine whether motion is present in the external scene; determining, based at least in part on analyzing the second intensity differences between the second successive intensity frames, that motion is not present in the external scene; and based at least in part on the determination that motion is not present in the external scene, capturing an additional intensity frame corresponding to the external scene while the hybrid image sensor is in the CIS-only mode.
  18. The method of any of claims 1-17, further comprising: capturing one or more intensity frames and contrast information corresponding to the external scene using the hybrid image sensor while the hybrid image sensor is in the hybrid mode; analyzing, while the hybrid image sensor is in the hybrid mode, the contrast information to determine whether motion is present in the external scene; determining, based at least in part on analyzing the contrast information, that motion is present in the external scene; and based at least in part on the determination that motion is present in the external scene, maintaining the hybrid image sensor in the hybrid mode.
  19. The method of claim 18, wherein maintaining the hybrid image sensor in the hybrid mode includes adjusting, based at least in part on second luminance levels in the external scene, a number of pixel circuits of the plurality of pixel circuits included in the first subset.
  20. The method of any of claims 1-17, further comprising: capturing one or more intensity frames and contrast information corresponding to the external scene using the hybrid image sensor while the hybrid image sensor is in the hybrid mode; analyzing, while the hybrid image sensor is in the hybrid mode, the contrast information to determine whether motion is present in the external scene; determining, based at least in part on analyzing the contrast information, that motion is not present in the external scene; and based at least in part on the determination that motion is not present in the external scene, switching the hybrid image sensor from the hybrid mode to a CIS-only mode by reconfiguring the first subset of pixel circuits into the CIS configuration to capture intensity information corresponding to the external scene.
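The claims above together describe a control loop: frame differencing detects motion (claim 3), motion toggles the sensor between CIS-only and hybrid modes (claims 2, 17-20), and the EVS pixel share is inversely related to scene luminance, drawn from a small set of discrete ratios (claims 6-7 and 11-15). The sketch below is a hypothetical illustration of that loop, not the patented implementation; all names and thresholds are invented for clarity.

```python
# Hypothetical control logic for the claimed mode switching;
# names and numeric thresholds are illustrative, not from the patent.

CIS_ONLY = "cis_only"
HYBRID = "hybrid"

# Discrete EVS allocations recited in claims 11-15, largest first.
EVS_FRACTIONS = [0.50, 0.25, 0.125, 0.0625, 0.03125]

def pick_evs_fraction(luminance, max_luminance=1000.0):
    """Claims 6-7: the EVS share is inversely related to scene
    luminance -- darker scenes get more event pixels."""
    luminance = min(max(luminance, 0.0), max_luminance)
    index = int(luminance / max_luminance * (len(EVS_FRACTIONS) - 1) + 0.5)
    return EVS_FRACTIONS[index]

def next_mode(mode, motion_detected):
    """Claims 2, 17-20: enter or stay in hybrid mode while motion is
    present; fall back to CIS-only when the scene is static."""
    return HYBRID if motion_detected else CIS_ONLY

def frame_diff_motion(prev_frame, frame, diff_threshold=8, min_pixels=4):
    """Claim 3: detect motion by intensity differences between
    successive intensity frames (here, flat lists of pixel values)."""
    changed = sum(1 for a, b in zip(prev_frame, frame)
                  if abs(a - b) > diff_threshold)
    return changed >= min_pixels
```

Under this reading, a dark scene with motion would run in hybrid mode with half the array in the EVS configuration, while a bright static scene collapses to CIS-only capture.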

Description

CROSS-REFERENCE TO RELATED APPLICATION(S)

This application claims priority to U.S. Provisional Application No. 63/714,458, titled RECONFIGURABLE HYBRID EVENT-BASED VISION SENSOR AND CMOS IMAGE SENSOR, filed October 31, 2024, which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

The present disclosure relates to image sensors, and more particularly to a reconfigurable hybrid event-based vision sensor (EVS) and CMOS image sensor (CIS) that can dynamically switch between operating modes based at least in part on motion detection and luminance conditions. For example, several embodiments of the present technology relate to hybrid image sensor systems that can reconfigure pixel functionality between CMOS image sensor operation and event-based vision sensor operation based at least in part on detected motion and ambient lighting conditions.

BACKGROUND

CMOS image sensors (CIS) operate in response to image light from an external scene being incident upon the image sensor. The image sensor includes an array of pixel circuits having photosensitive elements (e.g., photodiodes) that absorb a portion of the incident image light and generate image charge upon absorption of the image light. The image charge photogenerated by the pixel circuits may be measured as analog output image signals on column bitlines that vary as a function of the incident image light. In other words, the amount of image charge generated is proportional to the intensity of the image light, which is read out as analog image signals from the column bitlines and converted to digital values to provide information that is representative of the external scene. CMOS image sensors typically capture complete image frames through synchronized pixel circuit readout operations. By contrast, event-based vision sensors (EVS) represent a specialized class of image sensors that detect and output luminance changes from individual pixel circuits, combined with coordinate and temporal information.
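The per-pixel behavior of a generic event vision sensor can be modeled in a few lines: each pixel holds a reference log-intensity level and emits an event carrying its coordinates, a timestamp, and a polarity whenever the current log intensity moves past a preset contrast threshold. The toy model below is a sketch of that standard EVS behavior, not the pixel circuit disclosed here; the class name and threshold value are assumptions.

```python
import math

CONTRAST_THRESHOLD = 0.2  # log-intensity step; value is illustrative

class EventPixel:
    """Toy model of one EVS pixel: fires when log intensity moves
    past a preset threshold relative to the last reference level."""

    def __init__(self, x, y, initial_intensity):
        self.x, self.y = x, y
        self.log_ref = math.log(max(initial_intensity, 1e-6))

    def sample(self, intensity, timestamp):
        """Return an event dict (coords, time, polarity) or None."""
        log_i = math.log(max(intensity, 1e-6))
        delta = log_i - self.log_ref
        if abs(delta) < CONTRAST_THRESHOLD:
            return None  # change below threshold: pixel stays silent
        self.log_ref = log_i  # re-arm the comparator at the new level
        return {"x": self.x, "y": self.y, "t": timestamp,
                "polarity": +1 if delta > 0 else -1}
```

Because each pixel carries its own reference and comparator, events are sparse and asynchronous: a static scene produces no output at all, which is the source of the low-latency, low-power behavior described above.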
These sensors operate asynchronously, with each pixel circuit independently monitoring for luminance changes that exceed preset threshold values. When such changes are detected, the sensor generates event data that can include pixel circuit coordinates, timing information, and/or polarity data. This approach enables high-speed data output with low latency while maintaining reduced power consumption. The asynchronous operation allows multiple pixel circuits to generate events simultaneously, with arbitration circuits managing output order based on earliest-received events.

Hybrid sensor architectures combine EVS and CIS capabilities within a single device. Such hybrid image sensors offer the potential to leverage the advantages of both EVS and CIS sensing modalities. For example, hybrid image systems can utilize a first subset of pixel circuits to provide conventional image data and a second subset of pixel circuits to provide event-based information.

BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments of the present technology are described below with reference to the following figures, in which like or similar reference characters are used to refer to like or similar components throughout unless otherwise specified.

FIG. 1A illustrates a block diagram of a stacked hybrid sensor system configured in accordance with various embodiments of the present technology.
FIG. 1B illustrates a block diagram of the stacked hybrid sensor system of FIG. 1A configured in accordance with various embodiments of the present technology.
FIG. 2 illustrates a circuit diagram of a pixel circuit with mode switching capability configured in accordance with various embodiments of the present technology.
FIGS. 3A-3E illustrate pixel arrangements showing different ratios of CIS and EVS pixels configured in accordance with various embodiments of the present technology.
FIG. 4 illustrates a circuit diagram of a multi-subpixel pixel circuit configured in accordance with various embodiments of the present technology.
FIG. 5 is a flow diagram illustrating a method of operating an image sensor in accordance with various embodiments of the present technology.
FIGS. 6 and 7 illustrate example methods of operating pixel arrangements for a 50% EVS/50% CIS pixel allocation in accordance with various embodiments of the present technology.
FIGS. 8-10 illustrate example methods of operating pixel arrangements for a 25% EVS/75% CIS pixel allocation in accordance with various embodiments of the present technology.
FIGS. 11 and 12 illustrate example methods of operating pixel arrangements for a 12.5% EVS/87.5% CIS pixel allocation in accordance with various embodiments of the present technology.
FIG. 13 illustrates an example method of operating a pixel arrangement for a 6.25% EVS/93.75% CIS pixel allocation in accordance with various embodiments of the present technology.
FIGS. 14A-14B illustrate an example method of operating a pixel arrangement for a 3.125% EVS/96.875% CIS pixel allocation in accordance with various embodiments of the present technology.
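The figure list enumerates EVS/CIS allocations that are all powers-of-two subsamplings of the array: 1/2, 1/4, 1/8, 1/16, and 1/32 of the pixel circuits in the EVS configuration. The actual spatial patterns are defined by the patent figures; the sketch below is only a stand-in that demonstrates the ratios with a uniform subsampling mask, and the function name is hypothetical.

```python
# Illustrative generation of the EVS/CIS allocations from FIGS. 3A-3E
# and claims 11-15. A real sensor would use the spatial patterns shown
# in the figures (e.g., a checkerboard for the 50% case); this uniform
# subsampling only demonstrates the ratios.

def evs_mask(rows, cols, evs_fraction):
    """Return a rows x cols boolean grid where True marks a pixel
    circuit configured as EVS: one EVS pixel per 1/evs_fraction."""
    period = round(1.0 / evs_fraction)  # 2, 4, 8, 16, or 32
    return [[(r * cols + c) % period == 0 for c in range(cols)]
            for r in range(rows)]
```

For array sizes divisible by the period, the mask contains exactly the claimed fraction of EVS pixels, with the remainder left in the CIS configuration.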