KR-102962164-B1 - EVENT-BASED VISION SENSOR MANUFACTURED WITH 3D-IC TECHNOLOGY

KR 102962164 B1

Abstract

An event-based vision sensor is manufactured using an advanced stacking technology known as 3-dimensional integrated circuits (3D-IC), in which multiple wafers (or dies) are stacked and vertically interconnected. Accordingly, the sensor's electronic circuits are distributed between two or more electrically connected dies.

Inventors

  • Raphael Berner
  • Christian Brändli
  • Massimo Zannoni

Assignees

  • Sony Advanced Visual Sensing AG

Dates

Publication Date
2026-05-08
Application Date
2019-03-08
Priority Date
2018-03-14

Claims (7)

  1. An event-based vision sensor comprising a first die and a second die stacked on one another, wherein: a photodiode of each pixel of a pixel array is located on the first die; an event detector for each pixel of the pixel array, and a photoreceptor circuit that generates a photoreceptor signal according to light intensity and supplies the photoreceptor signal to the event detector, are located on the second die; the photoreceptor circuit includes a plurality of transistors; the event detector includes a memory capacitor, a comparator, and a memory; the memory capacitor comprises a metal-insulator-metal (MIM) capacitor, a first plate of the MIM capacitor receiving the photoreceptor signal generated by the photoreceptor circuit, a second plate of the MIM capacitor being connected to one input of the comparator, and the other input of the comparator being connected to a threshold signal applied by a controller; an output of the comparator is stored in the memory; a first metal layer in the first die and a second metal layer in the second die are connected to each other by a Cu-Cu connection; at least a portion of the MIM capacitor, the first metal layer in the first die, and the second metal layer in the second die overlap one another; and a conditional reset circuit is connected between the second plate of the MIM capacitor and the one input of the comparator, the conditional reset circuit being configured to switch its conduction state according to a combination of the output of the comparator stored in the memory and a reset signal applied by the controller.
  2. (Deleted)
  3. The event-based vision sensor of claim 1, wherein the first die has a back-illuminated architecture in which the wiring layer is located on the side opposite the light-receiving surface.
  4. (Deleted)
  5. The event-based vision sensor of claim 1, wherein the first die and/or the second die comprises a stacked die, and the stacked die has a through-silicon via connection.
  6. The event-based vision sensor of claim 1, wherein the photoreceptor circuit comprises an n-FET transistor and a p-FET transistor.
  7. The event-based vision sensor of claim 1, wherein the photoreceptor circuit comprises two n-FET transistors and a p-FET transistor.
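The event-detection chain recited in claim 1 (photoreceptor signal sampled onto a memory capacitor, a comparator against a controller-applied threshold, a memory latching the comparator output, and a conditional reset) can be summarized in a behavioral sketch. The following Python model is illustrative only: the class, the threshold value, and the logarithmic photoreceptor response are assumptions made for exposition, not circuitry taken from the patent.

```python
import math

class EventDetectorPixel:
    """Behavioral sketch (not from the patent) of a single DVS pixel's
    event-detection chain: photoreceptor -> memory capacitor ->
    comparator -> memory, with a conditional reset."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold      # threshold signal applied by the controller
        self.memorized_level = None     # level stored across the memory (MIM) capacitor
        self.memory = 0                 # latched comparator output (+1 ON, -1 OFF, 0 none)

    def photoreceptor(self, light_intensity):
        # Logarithmic photoreceptor response, typical of DVS front ends.
        return math.log(max(light_intensity, 1e-9))

    def step(self, light_intensity):
        v = self.photoreceptor(light_intensity)
        if self.memorized_level is None:
            self.memorized_level = v    # initial, unconditional reset
            return None
        diff = v - self.memorized_level
        event = None
        if diff > self.threshold:
            event, self.memory = "ON", 1
        elif diff < -self.threshold:
            event, self.memory = "OFF", -1
        # Conditional reset: the capacitor is re-referenced only when the
        # stored comparator output indicates an event occurred.
        if self.memory != 0:
            self.memorized_level = v
            self.memory = 0
        return event
```

In this sketch a sustained brightness change produces a single event, after which the conditional reset re-references the pixel so that only further changes are reported.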

Description

Event-Based Vision Sensor Manufactured with 3D-IC Technology

This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 62/642,838, filed on March 14, 2018, which is incorporated herein by reference in its entirety.

One of the critical parameters in the design of event-based pixel arrays (also referred to as Dynamic Vision Sensors (DVS)) is quantum efficiency (QE), the ratio between the number of electrons generated and the number of photons incident on the sensor. This parameter depends directly on the fill factor (FF), the ratio between the area of the light-exposed photosensitive device and the entire area of the light-exposed integrated circuit. Since today's event-based vision sensors are implemented in planar silicon processes, the light-exposed area must be shared between the photosensitive devices and the other semiconductor devices forming the pixel circuits. This approach has two major disadvantages: the area of the photosensitive devices is limited, and circuits not intended to be exposed to light are degraded by this radiation exposure.

The primary objective of the present invention is to mitigate these two problems by manufacturing event-based vision sensors with an advanced stacking technique known as 3-dimensional integrated circuits (3D-IC), which stacks multiple wafers (or dies) and vertically interconnects them. The motivations include the following: increasing the FF; shielding circuits that do not need to receive light from that light; and satisfying the differing requirements of different pixel components by implementing them in different IC processes (the photosensitive devices could, in principle, even be manufactured in non-silicon technologies, e.g., GaAs).
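The QE-FF relationship described above can be illustrated numerically. The sketch below assumes a simple first-order model in which photons landing outside the photodiode area are lost, so effective QE scales linearly with fill factor; the function name and the example values are hypothetical, not figures from the patent.

```python
def effective_quantum_efficiency(fill_factor, internal_qe):
    """First-order estimate: effective QE = fill factor x internal QE.
    (Illustrative model only, not a formula from the patent.)"""
    if not (0.0 <= fill_factor <= 1.0 and 0.0 <= internal_qe <= 1.0):
        raise ValueError("fill factor and internal QE must be in [0, 1]")
    return fill_factor * internal_qe

# A planar DVS pixel that shares area with circuitry, versus a stacked
# pixel whose top die is almost entirely photosensitive (assumed values):
planar_qe  = effective_quantum_efficiency(0.25, 0.8)
stacked_qe = effective_quantum_efficiency(0.90, 0.8)
```

Under these assumed numbers, stacking raises the effective QE from 0.2 to 0.72, which is the kind of improvement that motivates moving the non-photosensitive circuitry to a second die.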
Generally, according to one aspect, the present invention features an event-based vision sensor (EBVS) comprising vertically connected stacked dies. The photosensitive devices of each pixel of the pixel array can thereby be located on a die exposed to the illumination, while other devices not useful for light capture are on different wafers or dies. Preferably, there is at least one connection between the dies for every pixel of the pixel array. Typically, the photodiodes of each pixel of the pixel array are on the first die, the individual event detectors of each pixel of the pixel array are on the second die, and interconnects between the first die and the second die connect the photodiodes to the individual event detectors. This approach can be used with a front-illuminated or a back-illuminated architecture.

In addition, the photoreceptor circuit of each pixel of the pixel array can be implemented in a number of different ways. For example, it can be located on the second die or on the first die, or distributed between the first die and the second die. An additional amplification stage can be added to the first die. Often, n-FETs are used on the first wafer or die, and both n-FET and p-FET transistors are used on the second die. Moreover, the transistor characteristics may differ between the transistors on the first die and those on the second die, including different gate oxide thicknesses or different implants.

Generally, according to another aspect, the present invention features a method for manufacturing an event-based vision sensor. The method comprises the steps of manufacturing the different devices of each pixel of a pixel array on different wafers or dies, and then stacking the wafers or dies.

As used herein, a "die" is typically a rectangular piece of a semiconductor wafer, such as a chip. Such a piece of a semiconductor wafer contains part of one instance of an integrated circuit device, such as an event-based vision sensor.
References to wafers or dies reflect the different possible manufacturing approaches. Stacking may be performed at the wafer level, before dicing into dies; alternatively, stacking may be performed on individual dies after they have been cut or diced from the wafer. In either case, the final device resulting from the manufacturing process is a stack of dies. The method then includes the step of connecting the dies at each pixel using, for example, Cu-Cu connections. In one embodiment, the method further comprises the steps of manufacturing the photodiodes of each pixel of a pixel array on a first wafer or die and manufacturing the individual event detectors of each pixel of the pixel array on a second wafer or die.

The various novel details of construction and combinations of parts, and other advantages and features of the present invention, will now be described more particularly with reference to the accompanying drawings and pointed out in the claims. It will be understood that the specific methods and devices embodying the present invention are illustrated as exampl