CN-122027908-A - 360-Degree horizontal panoramic image stitching method for intelligent ship
Abstract
The invention discloses a 360-degree horizontal panoramic image stitching method for an intelligent ship. The method acquires all sequence frame images corresponding to each camera's video from a camera array and constructs a panorama hash mapping function from a set panorama width. Each pixel in the corresponding initial stitched panorama is traversed and projected in a cylindrical coordinate system to obtain a three-dimensional projection coordinate point, which is then projected through the camera intrinsics into the image pixel coordinate system to obtain an image pixel point. Based on the image pixel points, the pixel contribution weight of each camera in the corresponding image region of the panorama is confirmed, and hash key-value pairs are inserted according to the pixel contribution weights and the three-dimensional projection coordinate points, in combination with the panorama hash mapping function, to fill the panorama pixels and obtain the horizontal panorama of the corresponding sequence frame.
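The core geometric step (S6 in the claims) back-projects each panorama pixel onto a cylinder before reprojecting it into the camera images. A minimal sketch of that projection, assuming the common cylindrical model — the patent's exact formula appears only as an image in the source and is not reproduced here, so the parameters `fov_v`, `r`, and `s` and the linear pixel-to-angle mapping are illustrative assumptions:

```python
import math

def pano_pixel_to_cylinder(u, v, pano_w, pano_h, fov_v, r=1.0, s=1.0):
    """Back-project panorama pixel (u, v) to a 3D point on a cylinder.

    Illustrative assumptions (not the patent's exact formula): the column
    index maps linearly to azimuth over 360 degrees, the row index maps
    linearly to elevation over the vertical field of view fov_v (radians),
    r is the preset cylinder radius, and s is a vertical scaling factor.
    """
    theta = 2.0 * math.pi * u / pano_w    # horizontal azimuth of the pixel
    phi = fov_v * (v / pano_h - 0.5)      # vertical elevation, mid-row = 0
    x = r * math.sin(theta)               # point on the cylinder wall
    z = r * math.cos(theta)
    y = s * r * math.tan(phi)             # height along the cylinder axis
    return (x, y, z)
```

A pixel on the panorama's centre row lands on the cylinder's equator; the resulting 3D point would then be transformed by the second extrinsic matrix and projected through the intrinsics (S8) to locate the contributing camera pixel.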
Inventors
- YIN YONG
- GAO ZHENGLI
- JING QIANFENG
- SHEN HELONG
Assignees
- Dalian Maritime University (大连海事大学)
- Dalian Haida Zhilong Technology Co., Ltd. (大连海大智龙科技有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-01-13
Claims (8)
- 1. An intelligent ship 360-degree horizontal panoramic image stitching method, characterized by comprising the following steps: S1, arranging a plurality of cameras of the same type in a regular polygon to form a camera array, wherein the extension lines of the negative Z-axis directions of the cameras intersect at one point, and the horizontal fields of view of any two adjacent cameras have a preset overlapping field of view; S2, acquiring all sequence frame images corresponding to each camera's video from the camera array; S3, acquiring a second extrinsic matrix of each camera relative to a given center camera based on the camera array, according to the camera intrinsics and a preset first extrinsic matrix, wherein the intrinsics comprise an intrinsic matrix, the camera resolution, and a camera distortion coefficient matrix; S4, acquiring the camera vertical field angle from the intrinsic matrix and the camera resolution, acquiring the panorama height from the camera vertical field angle based on a given panorama width to determine the panorama size, and acquiring an initial stitched panorama at a certain moment from the sequence frame images at that moment, wherein the initial stitched panorama is the image obtained by stitching the sequence frame images acquired by each camera according to the preset overlapping field of view; S5, establishing a panorama hash mapping function according to the panorama size; S6, traversing each pixel in the corresponding initial stitched panorama and projecting each pixel in a cylindrical coordinate system to obtain a three-dimensional projection coordinate point; S7, acquiring the contribution value of each camera to the three-dimensional projection coordinate point according to the three-dimensional projection coordinate point and the second extrinsic matrix, and determining the pixel contribution camera(s) of the three-dimensional projection coordinate point from the contribution values; S8, based on the pixel contribution camera determined in S7, converting the three-dimensional projection coordinate point into the preset local coordinate system of the pixel contribution camera according to the camera intrinsic matrix and distortion coefficient matrix in combination with the second extrinsic matrix, and then projecting it into two-dimensional image pixel coordinates to obtain the image pixel point; S9, confirming the pixel contribution weight of each camera in the corresponding image region of the panorama based on the image pixel points, and inserting hash key-value pairs according to the pixel contribution weights and the three-dimensional projection coordinate points, in combination with the panorama hash mapping function, to fill the panorama pixels and obtain the 360-degree horizontal panorama of the corresponding sequence frame; S10, repeating S6 to S9 for each sequence frame image to obtain the 360-degree horizontal panoramas of all sequence frame images, and encoding all the resulting panoramas to obtain the optimized horizontal panoramic video.
- 2. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 1, wherein the second extrinsic matrix in S3 is obtained by a formula (rendered as an image in the source and not reproduced here) whose symbols denote: the rotation matrix of the i-th camera relative to the given center camera; the rotation matrix of the i-th camera; the rotation matrix of the given center camera; the translation vector of the i-th camera relative to the given center camera; the translation vector of the given center camera; and the translation vector of the i-th camera.
- 3. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 2, wherein S4 specifically comprises the following steps: S41, obtaining the camera vertical field angle from the intrinsic matrix and the camera resolution by a formula (not reproduced in this text) whose symbols denote: the camera vertical field angle; the camera vertical resolution; and the pixel focal length of the camera in the Y-axis direction in the intrinsic matrix; S42, acquiring the panorama height from the camera vertical field angle based on the given panorama width to determine the panorama size, and acquiring the initial stitched panorama from the sequence frame images at a certain moment, wherein the initial stitched panorama is the image obtained by stitching the sequence frame images acquired by all cameras according to the preset overlapping field of view; the panorama height formula (not reproduced in this text) involves the given panorama width and the panorama height.
- 4. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 3, wherein S5 specifically comprises the following steps: S51, defining each pixel point of the panorama of the determined size by its coordinates, where (in the formula not reproduced in this text) the stated symbol represents the abscissa of the pixel point; S52, taking the pixel coordinates in the panorama as the key and the camera information associated with that panorama pixel as the value, to construct the key-value pairs of the hash map; the key and value are defined by formulas (not reproduced in this text) whose symbols denote: the set of integers; the IDs of two adjacent cameras; the pixel coordinate point in the sequence frame image acquired by each camera; and the contribution weight of each camera, to be determined at the panorama pixel coordinate point, subject to the stated normalization constraints; S53, constructing the panorama hash mapping function from the key-value pairs.
- 5. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 4, wherein the three-dimensional projection coordinate point in S6 is obtained by a formula (not reproduced in this text) whose symbols denote: the three-dimensional coordinates corresponding to the three-dimensional projection coordinate point; the preset cylinder radius of the cylindrical projection; the horizontal azimuth angle of the pixel point; the vertical elevation angle of the pixel point; and a scaling factor, within a stated range, for adjusting the projection scale in the vertical direction.
- 6. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 5, wherein S7 specifically comprises the following steps: S71, obtaining the contribution value of each camera to the three-dimensional projection coordinate point according to the three-dimensional projection coordinate point and the second extrinsic matrix, by a formula (not reproduced in this text) whose symbol denotes the contribution value of the i-th camera to the three-dimensional projection coordinate point; S72, determining the pixel contribution camera(s) of the three-dimensional projection coordinate point from the contribution values, specifically: determining the dominant contribution camera from the contribution values, and simultaneously confirming whether a secondary contribution camera exists, by discrimination formulas (not reproduced in this text) whose symbols denote: the set of secondary contribution cameras; a secondary contribution camera ID; the secondary contribution camera that maximizes the stated value over the set; the cosine value corresponding to the dominant contribution camera, i.e. the pixel contribution value; and the cosine value corresponding to the i-th camera.
- 7. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 6, wherein the image pixel point in S8 is obtained by a formula (not reproduced in this text) whose symbols denote: a transpose; the rotation matrix and translation vector of the pixel contribution camera relative to the given center camera; the coordinate point obtained by converting the three-dimensional projection coordinate point into the preset local coordinate system of the pixel contribution camera; the intrinsic matrix of the i-th camera; the distortion coefficient matrix of the i-th camera; the image pixel point; and the standard projection function.
- 8. The intelligent ship 360-degree horizontal panoramic image stitching method according to claim 7, wherein S9 specifically comprises the following steps: S91, confirming the pixel contribution weight of each camera corresponding to the three-dimensional projection coordinate point based on the image pixel points, as follows: when the first stated condition holds, the three-dimensional projection coordinate point is contributed by a single camera, and its pixel contribution weight is fixed accordingly; when the second stated condition holds, the point is contributed by two cameras, and the image region of the two contributing cameras is confirmed based on the preset overlapping field of view between adjacent cameras in S1; with the adjacent camera pair, the image region width, and the horizontal position of the region's center point defined as in the source formulas, a relative offset coefficient with respect to the region center is calculated for each pixel point in the image region, from which the pixel contribution weights of the adjacent camera pair are obtained; S92, based on the pixel contribution weights, acquiring the latest hash mapping function from the three-dimensional projection coordinate points and the panorama hash mapping function, wherein the symbols of the formula (not reproduced in this text) denote: the pixel coordinates of the panorama pixel point in the dominant contribution camera, and its shorthand form; and the pixel coordinates of the panorama pixel point in the secondary contribution camera, and its shorthand form; S93, setting a target empty-pixel panorama with the determined width and height, and initializing the RGB channel values of all pixel points in its two-dimensional image matrix to a preset background value; S94, filling the panorama pixels of the target empty-pixel panorama according to the latest hash mapping function to obtain the 360-degree horizontal panorama of the corresponding sequence frame; the filling method is as follows: for each panorama pixel point in the target empty-pixel panorama, the corresponding tuple data is obtained through the latest hash mapping function and, according to the stated conditions, the pixel value is written from the single contributing camera or as the weighted combination of the two contributing cameras, thereby completing the pixel filling of the target empty-pixel panorama.
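The weighting scheme of claims 6 and 8 (a dominant and a possible secondary contribution camera, blended by a relative offset coefficient across the overlap region) can be sketched as follows. This is an illustrative reconstruction, not the patent's formulas: the camera yaw list, the linear falloff from the optical axis, and the final normalisation are all assumptions.

```python
import math

def column_contributions(theta, cam_yaws, fov_h):
    """Return [(camera_index, weight), ...] for a panorama column at
    azimuth theta (radians).

    Illustrative sketch: every camera whose horizontal field of view
    fov_h covers theta contributes, with a weight falling off linearly
    from 1 at the optical axis to 0 at the FOV edge, then normalised so
    the weights sum to 1 (single camera -> weight 1.0; overlap region ->
    two feathered weights, analogous to claim 8's offset coefficient).
    """
    half = fov_h / 2.0
    hits = []
    for i, yaw in enumerate(cam_yaws):
        # smallest signed angular difference between theta and the axis
        d = (theta - yaw + math.pi) % (2.0 * math.pi) - math.pi
        if abs(d) <= half:
            hits.append((i, 1.0 - abs(d) / half))
    total = sum(w for _, w in hits)
    return [(i, w / total) for i, w in hits]
```

Filling a panorama pixel (S94) then reduces to `sum(w * camera_pixel[i] for i, w in contributions)`: a single entry copies one camera's pixel value, two entries blend across the overlap.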
Description
360-Degree horizontal panoramic image stitching method for intelligent ship
Technical Field
The invention relates to the field of intelligent ship navigation, and in particular to a 360-degree horizontal panoramic image stitching method for an intelligent ship.
Background
During a ship's voyage, the driver and shore-based monitoring personnel need to observe the sea surface at long range and high resolution. However, most existing ship video monitoring systems are built from the pictures of independent cameras: when a driver watches multiple video streams, the spatial mapping between them is poor, and it is difficult to quickly form a mental picture of the overall spatial relationship between the ship and its surroundings. A single pan-tilt camera can rotate, but it can only look in one direction at a time and cannot cover all directions simultaneously. Fisheye surround-view systems are mostly used for close-range berthing; their severe edge distortion and limited effective pixels make it difficult to meet the identification requirements for distant sea-surface targets such as vessels and buoys. Therefore, seamless stitching of high-resolution panoramic images, providing a real-time, continuous, and intuitive picture of the environment, is one of the keys to improving the safe navigation of intelligent ships. Moreover, once the panoramic video is obtained, the panoramic video stream can be transmitted to a shore-based control center in real time with low delay, so that a shore-based operator obtains a global view almost identical to that of the ship's driver, providing immersive scene support for remote-controlled navigation, emergency command in distress, remote crew training, and the like; this, too, is one of the technologies necessary for the development of current intelligent ships.
The 360-degree horizontal panoramic image stitching method based on a multi-camera array is a direct and efficient technical path to this goal. Current panoramic image generation technologies fall mainly into three categories: the first adopts special optical devices, such as integrated panoramic cameras or catadioptric-lens panoramic imaging devices, achieving panoramic imaging by integrating multiple cameras or exploiting specular reflection; the second is based on single-camera rotary scanning, generating a planar panorama by continuously collecting an image sequence with overlapping regions and then performing image registration, alignment, and fusion; the third relies on the synchronous acquisition of a multi-camera array to build a panoramic imaging system and synthesizes the panorama with a subsequent stitching algorithm. In multi-camera-array panoramic imaging systems, stitching methods divide mainly into algorithms based on feature detection and matching, and methods performing geometric correction and projection fusion based on pre-calibrated camera parameters. Although the above techniques have been applied in certain scenarios, they have significant limitations in the complex maritime environment faced by intelligent ships: (1) While integrated panoramic cameras and catadioptric equipment can achieve panoramic coverage, their nominal resolution corresponds to the entire 360-degree picture, so the actual effective resolution in any single direction is low, and picture detail is severely lost after stretching. Offshore observation often requires identifying both distant and nearby targets and demands high local resolution; such devices can hardly meet the requirement of long-range accurate observation.
In addition, catadioptric equipment is costly and highly specialized, which is unfavorable for large-scale shipboard deployment. (2) The single-camera rotary scanning approach suits static or slowly changing scenes; in the dynamic offshore environment, targets such as ships and waves move continuously, so scene changes between adjacent frames are obvious, easily causing stitching misalignment, ghosting, and information loss, and real-time, coherent panoramic visual support cannot be provided. (3) The stitching method based on feature-point detection and matching relies on abundant, stable feature points in the image; however, open-sea scenes are often texture-sparse with few feature points, and interference factors such as sea-surface fluctuation and illumination changes further degrade the stability and matching accuracy of feature extraction. Meanwhile, the algorithm's computational complexity is high, making it difficult to meet the requirements of real-time video stitching and low-delay transmission, which limits the application of the algorithm.