JP-7856422-B2 - Image processing device, imaging device, image processing method, program, recording medium
Inventors
- 池田 平
Assignees
- キヤノン株式会社 (Canon Inc.)
Dates
- Publication Date
- 2026-05-11
- Application Date
- 2021-12-21
Claims (14)
- An image processing apparatus comprising: synthesis means for combining regions cropped from each of a plurality of images; and acquisition means for acquiring distance information of a subject from at least some of the plurality of images, wherein the synthesis means determines, for at least some of the plurality of images, the width of the region to be cropped from an image based on the distance information of that image acquired by the acquisition means, the image processing apparatus being characterized in that, when the width of the region cropped from a first image having first distance information is a first width, a second width, being the width of the region cropped from a second image having second distance information closer than the first distance information, is narrower than the first width.
- The image processing apparatus according to claim 1, characterized in that, for an image among the plurality of images in which different distance information exists within the region used for synthesis, the synthesis means performs the synthesis using a further-divided cropped region.
- The image processing apparatus according to claim 2, characterized in that, within the region of an image, among the plurality of images, in which different distance information exists within the region used for synthesis, the number of times the synthesis means performs cropping on a first portion having third distance information is smaller than the number of times it performs cropping on a second portion having fourth distance information that is closer than the third distance information.
- The image processing apparatus according to any one of claims 1 to 3, characterized in that the synthesis means crops, with a third width, a third image among the plurality of images having distance information farther than a predetermined distance, and crops, with a fourth width narrower than the third width, a fourth image having distance information closer than the predetermined distance.
- The image processing apparatus according to claim 4, characterized in that the fourth width is the third width divided by an integer.
- An imaging device comprising: imaging means for capturing a plurality of images; synthesis means for combining regions cropped from each of the plurality of images; and acquisition means for acquiring distance information of a subject from at least some of the plurality of images, wherein the synthesis means determines, for at least some of the plurality of images, the width of the region to be cropped from an image based on the distance information of that image acquired by the acquisition means, the imaging device being characterized in that the synthesis means crops, with a first width, a region of a first image among the plurality of images whose distance information is farther than a predetermined distance, and crops, with a second width narrower than the first width, a region of a second image whose distance information is closer than the predetermined distance.
- The imaging device according to claim 6, characterized in that the plurality of images are captured while panning the imaging device.
- The imaging device according to claim 7, characterized in that the panning is performed at a constant speed and the imaging intervals between the plurality of images are equal.
- The imaging device according to any one of claims 6 to 8, characterized in that the synthesis means uses, for the synthesis, images at integer intervals starting from the first image.
- The imaging device according to any one of claims 6 to 8, characterized in that the second width is the first width divided by an integer.
- The imaging device according to any one of claims 6 to 8, characterized in that the synthesis means uses every N-th image, starting from the first image, for the synthesis, and the second width is 1/N of the first width.
- An image processing method comprising: a synthesis step of combining regions cropped from each of a plurality of images; and an acquisition step of acquiring distance information of a subject from at least some of the plurality of images, wherein, in the synthesis step, for at least some of the plurality of images, the width of the region to be cropped from an image is determined based on the distance information of that image acquired in the acquisition step, the image processing method being characterized in that, when the width of the region cropped from a first image having first distance information is a first width, a second width, being the width of the region cropped from a second image having second distance information closer than the first distance information, is narrower than the first width.
- A program that causes a computer to execute an image processing method comprising: a synthesis step of combining regions cropped from each of a plurality of images; and an acquisition step of acquiring distance information of a subject from at least some of the plurality of images, wherein, in the synthesis step, for at least some of the plurality of images, the width of the region to be cropped from an image is determined based on the distance information of that image acquired in the acquisition step, the program being characterized in that, when the width of the region cropped from a first image having first distance information is a first width, a second width, being the width of the region cropped from a second image having second distance information closer than the first distance information, is narrower than the first width.
- A computer-readable storage medium storing the program according to claim 13.
Description
This invention relates to an image processing apparatus, and more particularly to an image processing apparatus that crops portions from multiple images and synthesizes them.

Conventionally, a method has been proposed for generating a panoramic image by continuously capturing still images while panning the imaging device and then combining the continuously captured still images (Patent Document 1).

- Patent Document 1: Japanese Patent Publication No. 2010-28764

The attached drawings comprise:
- a block diagram showing the configuration of an image processing apparatus in an embodiment of the present invention;
- a figure illustrating the panning direction and image cropping region in an embodiment of the present invention;
- a diagram illustrating the processing flow of panoramic imaging in the first embodiment and its correspondence with the image data;
- a flowchart illustrating the operation of panoramic imaging in the first embodiment;
- a diagram illustrating the processing flow of panoramic imaging in the second embodiment and its correspondence with the image data;
- a flowchart illustrating the operation of panoramic imaging in the second embodiment.

The following describes preferred embodiments of the present invention with reference to the attached drawings. In each drawing, identical members or elements are given the same reference numerals, and redundant explanations are omitted or simplified. The following description focuses on an example in which the image processing device is applied to a digital (still) camera. However, the image processing device may also be another electronic device, such as a movie camera, a smartphone with a camera, a tablet computer with a camera, an in-vehicle camera, or a network camera. Additionally, the image processing device in the following embodiments may be a computer capable of processing images captured by another device.
(First embodiment) Figure 1 is a block diagram showing the configuration of an image processing apparatus in the first embodiment of the present invention. As shown in Figure 1, the image processing apparatus of this embodiment mainly consists of a digital camera 100 and an interchangeable lens unit 300 that is detachable from the digital camera 100. In the digital camera 100, the shutter 20 controls the incidence time (exposure time) of the optical image on the imaging unit 22, described later. The imaging unit 22 includes an image sensor composed of CCD or CMOS elements, which converts the optical image into an electrical signal; it functions as an imaging means for capturing an image of a subject. The imaging unit 22 also incorporates an A/D conversion function and an AF evaluation value detection unit 23. The AF evaluation value detection unit 23 calculates an AF evaluation value based on contrast information obtained from the digital image signal and on the phase difference obtained from parallax images, and outputs it to the system control unit 50. The image processing unit 24 performs resizing and color conversion processing, such as predetermined pixel interpolation and reduction, on the image data output from the imaging unit 22 or on the image data from the memory control unit 15. Furthermore, the image processing unit 24 can acquire distance information of the subject based on the AF evaluation value. That is, by detecting the phase difference between the two input parallax images, the distance to the subject is obtained, and distance information from the imaging unit to the subject can be acquired pixel by pixel. A sensor such as a TOF (Time of Flight) sensor may instead be used to acquire the distance information. The image processing unit 24 also acquires exposure control information by performing predetermined calculations on the captured image data.
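The per-pixel distance acquisition described above can be illustrated with the standard stereo relation between phase difference (disparity) and distance. The sketch below is not taken from the patent; the function name and the focal-length, baseline, and disparity values are illustrative assumptions only.

```python
# Hypothetical sketch of converting a phase difference (disparity) between
# two parallax images into a subject distance, using the standard stereo
# relation Z = f * B / d. All parameter values here are assumed, not from
# the patent.

def distance_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Return subject distance in mm; larger disparity means a closer subject."""
    if disparity_px <= 0:
        return float("inf")  # no measurable parallax: treat as infinitely far
    return focal_length_px * baseline_mm / disparity_px

# Example with an assumed 2000 px focal length and 10 mm baseline:
near = distance_from_disparity(20.0, 2000.0, 10.0)  # 1000.0 mm
far = distance_from_disparity(2.0, 2000.0, 10.0)    # 10000.0 mm
```

Applying this conversion at every pixel of the disparity map yields the pixel-by-pixel distance information the embodiment describes; a TOF sensor would provide the same per-pixel distances directly.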
Based on this distance information and exposure control information, the system control unit 50 performs exposure control and focus adjustment control. This enables TTL (through-the-lens) AE (automatic exposure) processing, EF (automatic flash emission) processing, and so on. Furthermore, the image processing unit 24 performs autofocus (AF) processing based on the distance information, using the output of the AF evaluation value detection unit 23 in the imaging unit 22. The image processing unit 24 also performs TTL auto white balance (AWB) processing by applying predetermined calculations to the captured image data. Additionally, the image processing unit 24 synthesizes multiple images obtained from multiple imaging cycles to generate a panoramic image. The detection unit 26 includes a gyro sensor and other sensors to acquire angular velocity information, attitude information, and the like for the digital camera 100. The angular velocity information includes the angular velocity and angular acceleration during panning imaging by the digital camera 100. The attitude information includes information such as the tilt of the digital camera 100 relative to the horizontal direction.
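The claims' core idea, choosing a narrower crop width for images whose subject is closer than a predetermined distance, can be sketched as follows. This is an illustrative reading of the claims, not the patent's actual implementation; the threshold, base width, and divisor N are assumed values.

```python
# Illustrative sketch of the distance-dependent crop-width selection in the
# claims: strips cropped from frames whose subject distance exceeds a
# threshold keep the base (first) width, while strips for closer subjects
# are narrowed to base_width / N (the second width). Narrower strips taken
# from more frames reduce parallax misalignment for near subjects. The
# threshold, base width, and N below are assumptions for illustration.

def crop_width(subject_distance_mm, threshold_mm=3000.0,
               base_width_px=240, n=4):
    """Return the strip width (px) to crop from one captured frame."""
    if subject_distance_mm >= threshold_mm:
        return base_width_px       # far subject: first (wider) width
    return base_width_px // n      # near subject: second width = 1/N of first

widths = [crop_width(d) for d in (10000.0, 5000.0, 1500.0)]
# far, far, near -> [240, 240, 60]
```

Because the near-subject width is 1/N of the far-subject width, N times as many narrow strips are needed to cover the same panoramic span, which is consistent with the claim that uses every N-th image with the wider width.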