
KR-20260064442-A - METHOD, APPARATUS, AND SYSTEM FOR CONSOLIDATED VIEWING FROM MULTIPLE CAMERAS

KR 20260064442 A

Abstract

A method is disclosed for displaying a combined view comprising a pseudo-3D view of at least a portion of an object of interest together with a comprehensive 2D view of a multi-camera coverage area. This is achieved by performing image processing on video feeds from multiple cameras to provide a multi-camera viewing experience consolidated on a single display window or screen, which provides greater user efficiency and more comprehensive intelligence regarding the object of interest than a system lacking such a processor and network interface.

Inventors

  • Sweeney, M. Jeffrey

Assignees

  • Hanwha Vision Co., Ltd.

Dates

Publication Date
2026-05-07
Application Date
2025-03-17
Priority Date
2024-10-31

Claims (20)

  1. A method comprising: coordinating operation of one or more of a plurality of surveillance cameras, each camera having its own field of view (FOV) and being operatively connected to one another through a network connection, the plurality of surveillance cameras including at least one camera designated as a main camera and at least one other camera designated as an auxiliary camera, the main camera and the auxiliary camera providing video feeds associated with a multi-camera coverage area; consolidating at least some of a plurality of video feeds from the plurality of surveillance cameras, including the main camera and the auxiliary camera, to obtain a consolidated image using video analytics of the plurality of surveillance cameras for tracking or monitoring an object of interest; and selectively displaying one of a non-consolidated view or a consolidated view comprising a comprehensive 2D view of the multi-camera coverage area together with a pseudo-3D view of at least a portion of the object of interest displayed within a single bounding box, wherein the pseudo-3D view is based on the consolidated image.
  2. The method of claim 1, wherein designating the cameras as main or auxiliary cameras is performed or changed dynamically according to characteristics or movements of the object of interest being tracked or monitored in the multi-camera coverage area.
  3. The method of claim 2, wherein the consolidating is performed using image timing information and image size information from the plurality of surveillance cameras to obtain the consolidated image.
  4. The method of claim 3, wherein the single bounding box is graphically inserted into, overlaid as a layer on, or otherwise mapped onto the comprehensive 2D view such that one or more camera angles are displayed within the single bounding box.
  5. The method of claim 4, wherein the pseudo-3D view is provided without using image processing associated with three-dimensional (3D) rendering.
  6. The method of claim 5, wherein displaying the consolidated view eliminates the need to toggle or switch between different windows or screens, thereby providing an enhanced multi-camera viewing experience.
  7. An apparatus comprising: a memory containing information associated with video feeds from a plurality of surveillance cameras for a multi-camera coverage area, each camera having its own field of view (FOV) and being operatively connected to one another via a network connection; an interface that cooperates with the plurality of surveillance cameras to receive the video feeds therefrom and transmit control signals thereto; an image processing module operatively connected to the memory and the interface, the image processing module performing image processing on the video feeds from the plurality of surveillance cameras to effect consolidation of at least some of the plurality of video feeds to obtain a consolidated image using video analytics of the plurality of surveillance cameras for tracking or monitoring an object of interest; and a controller operatively connected to a display, the memory, the interface, and the image processing module, the controller providing control signals to the plurality of surveillance cameras and providing instructions to the image processing module and the display to selectively display one of a separate view of each of two or more fields of view (FOV) from two or more of the cameras, or a combined view comprising a comprehensive 2D view of the multi-camera coverage area together with a pseudo-3D view of at least a portion of the object of interest, thereby providing an enhanced multi-camera viewing experience.
  8. The apparatus of claim 7, wherein the control signals from the controller include an instruction designating at least one of the plurality of surveillance cameras as a main camera and at least one other camera as an auxiliary camera, the main camera and the auxiliary camera providing video feeds associated with the multi-camera coverage area.
  9. The apparatus of claim 8, wherein the control signal from the controller designating the cameras as main or auxiliary cameras is issued or changed dynamically according to characteristics or movements of the object of interest being tracked or monitored in the multi-camera coverage area.
  10. The apparatus of claim 9, wherein the controller cooperates with the image processing module and the display to generate the pseudo-3D view without using image processing associated with three-dimensional (3D) rendering.
  11. The apparatus of claim 10, wherein the consolidation by the image processing module is performed using image timing information and image size information from the plurality of surveillance cameras to obtain the consolidated image.
  12. The apparatus of claim 11, wherein the controller provides instructions to the image processing module and the display such that the pseudo-3D view of at least a portion of the object of interest, based on the consolidated image, is displayed within a single bounding box.
  13. The apparatus of claim 11, wherein at least one of the controller and the image processing module operates to cause a visual indicator responsive to a change in a perceived risk state of the object of interest, based on video analytics information from the auxiliary camera used to enhance intelligence associated with one or more images from a main view used to determine the change in the perceived risk state that triggers the visual indicator.
  14. A system comprising: a network interface that cooperates with a plurality of surveillance cameras to receive video feeds therefrom and transmit control signals thereto, the network interface obtaining, from at least one main camera, information about a main view including one or more images of an object of interest in a coverage area, and obtaining, from one or more auxiliary cameras having fields of view different from the field of view of the main view, information about one or more auxiliary views including one or more images of the object of interest in the coverage area; and a processor that performs image processing on the video feeds to provide a multi-camera viewing experience consolidated on a single display window or screen, which provides greater user efficiency and more comprehensive intelligence regarding the object of interest than a system without the processor and network interface.
  15. The system of claim 14, further comprising a video analytics module operatively connected to at least one of the processor and the network interface, the video analytics module applying at least one of video analytics, artificial intelligence (AI), cloud server processing, and image rendering to the video feeds to provide the consolidated multi-camera viewing experience.
  16. The system of claim 15, wherein the video analytics module performs processing to enhance the main view by adding or supplementing information from an auxiliary view without requiring monitoring of separate views in multiple windows or screens.
  17. The system of claim 16, wherein the processor and the video analytics module cooperate to provide pseudo-3D viewing without using image processing associated with three-dimensional (3D) rendering.
  18. The system of claim 17, wherein the processor and the video analytics module cooperate to display a visual indicator responsive to a change in a perceived risk state of the object of interest, based on use of an auxiliary camera to enhance one or more images from the main view used to determine the change in the perceived risk state that triggers display of the visual indicator.
  19. The system of claim 18, wherein the processor and the video analytics module cooperate to display the visual indicator in the form of a bounding box graphical depiction that provides a visible indication of a harmful object or behavior associated with the object of interest in the main view.
  20. The system of claim 19, wherein the processor and the video analytics module further cooperate to provide an ability to switch between the consolidated multi-camera viewing experience and a conventional multi-screen view via multiple windows or screens.
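As an illustrative sketch only (not the patented implementation), the consolidation recited in claims 1–4 can be approximated as pairing each main-camera frame with the nearest-in-time auxiliary frames (image timing information) and normalizing the auxiliary images to the main frame's dimensions (image size information). All names below (`Frame`, `nearest_frame`, `consolidate`, the 40 ms tolerance) are hypothetical choices made for this example:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    camera_id: str
    role: str            # "main" or "auxiliary"
    timestamp_ms: int    # image timing information
    width: int           # image size information
    height: int

def nearest_frame(feed: List[Frame], t_ms: int, tol_ms: int = 40) -> Optional[Frame]:
    """Pick the frame in a feed closest in time to t_ms, or None if none is close enough."""
    best = min(feed, key=lambda f: abs(f.timestamp_ms - t_ms), default=None)
    if best is None or abs(best.timestamp_ms - t_ms) > tol_ms:
        return None
    return best

def consolidate(main: Frame, aux_feeds: List[List[Frame]]) -> dict:
    """Merge one main frame with time-aligned, size-normalized auxiliary frames."""
    aligned = []
    for feed in aux_feeds:
        f = nearest_frame(feed, main.timestamp_ms)
        if f is not None:
            # scale factor that normalizes the auxiliary image width to the main frame
            aligned.append({"camera_id": f.camera_id, "scale": main.width / f.width})
    return {"timestamp_ms": main.timestamp_ms,
            "main": main.camera_id,
            "auxiliary": aligned}

# Example: a 1920x1080 main frame paired with a 1280x720 auxiliary feed.
main = Frame("cam0", "main", 1000, 1920, 1080)
aux = [[Frame("cam1", "auxiliary", 990, 1280, 720),
        Frame("cam1", "auxiliary", 1030, 1280, 720)]]
result = consolidate(main, aux)
```

The real claims additionally cover mapping the pseudo-3D crop into a single bounding box on the 2D overview and dynamically re-designating main and auxiliary roles; those steps are omitted here for brevity.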

Description

Method, Apparatus, and System for Consolidated Viewing from Multiple Cameras

The present invention relates to image processing that recognizes a person or object in an image of a coverage area captured by one or more cameras, and performs image data processing thereon to implement consolidated viewing. As an example of related technology, there is Korean Patent No. 10-2339825, titled "Device for Context Awareness and Image Stitching Method Thereof." Its English abstract partially states, "The present invention provides a stitching-based device for context awareness and an image stitching method thereof capable of generating a panoramic image that reflects information from an observer's viewpoint when stitching images captured by a plurality of cameras having different viewing angles." As another example of related technology, there is U.S. Patent No. 10,979,645, titled "Video recording device including cameras and video recording system including the same." Its abstract states, "The video recording device may include one or more fixed cameras and a video calibrator connected to a PTZ camera. The video calibrator receives a first image captured by one or more fixed cameras and a second image obtained by moving the PTZ camera to capture an image, searches for an image region that matches the first image within a designated reference window in the second image according to a default value related to the aiming direction of the fixed camera, and may output information related to the searched image region." The above-mentioned Korean Patent No. 10-2339825 and U.S. Patent No. 10,979,645 are both owned by the applicant, and the entire contents of these disclosures are incorporated herein by reference and constitute part of the present disclosure. In these related technologies, specific improvements or enhancements may be required depending on the location and manner in which monitoring systems and methods are implemented.
The attached drawings, together with the detailed description of the invention, constitute part of the specification and help to illustrate various embodiments of the disclosure and explain specific principles and effects. In this context, the same reference numerals denote identical or functionally similar components in separate drawings.
FIG. 1 is a conceptual diagram showing the relationship between specific exemplary hardware and software elements applicable to one or more embodiments of the present disclosure.
FIG. 2 is a conceptual diagram illustrating some basic aspects related to one or more embodiments of the present disclosure.
FIG. 3 is a conceptual diagram illustrating some aspects related to the development of the 2D to 3D viewing experience for users.
FIG. 4 shows an exemplary implementation of a first step according to one or more embodiments of the present disclosure.
FIG. 5 shows an exemplary implementation of a second step according to one or more embodiments of the present disclosure.
FIG. 6 shows an exemplary implementation of a third step according to one or more embodiments of the present disclosure.
FIG. 7 illustrates a first embodiment of the present disclosure.
FIG. 8 illustrates a second embodiment of the present disclosure.
FIG. 9 illustrates a third embodiment of the present disclosure.
Those skilled in the art will understand that some components in the drawings are depicted for simplification and clarity and are not necessarily drawn to actual scale. The dimensions of some components in the drawings may be exaggerated relative to others to aid in understanding the embodiments of the present disclosure. Before describing the embodiments of the present invention in detail, it should be understood that the features of the present invention described herein are not limited in their application to the structure or arrangement of components or the details of method steps illustrated in the following description or drawings.
The features of the present invention may be embodied in other embodiments and may be implemented in various ways. Furthermore, the terms and expressions used herein are for illustrative purposes only and should not be interpreted in a restrictive sense. The terms "including," "comprising," or "having," and variations thereof, are meant to encompass the items listed thereafter and equivalents thereof, as well as additional items. Unless otherwise specified or limited, terms such as "mounted," "connected," "supported," and "coupled," and variations thereof, are used in a broad sense and include both direct and indirect mounting, connection, support, and coupling. Furthermore, "connected" and "coupled" are not limited to physical or mechanical connections or couplings. The following disclosure is presented to enable those skilled in the art to make and use the embodiments. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications. Accordingly, the features