US-20260127765-A1 - SYSTEM AND METHOD FOR DETERMINING POSITION OF A MOVING OBJECT
Abstract
An image of an object is received, the image being captured by a camera. Two-dimensional (2D) image points on perimeters of the object in the image are determined. Using a rotation component of a homography matrix, the 2D image points are converted into corresponding three-dimensional (3D) points on a 3D conic section that passes through a center of the camera and the perimeters of the object. The 3D points on the 3D conic section are normalized. A principal direction to a center of the object is determined, based on the normalized 3D points on the 3D conic section. A 2D object center is determined based on the principal direction and the rotation component of the homography matrix.
Inventors
- Evgeny Lipunov
- Baglan Aitu
- Osman Murat TEKET
- Batuhan Okur
Assignees
- Rapsodo Pte. Ltd.
Dates
- Publication Date: 2026-05-07
- Application Date: 2025-12-30
Claims (20)
- 1 . An apparatus comprising: at least one memory device; and at least one processor coupled with the memory device, the at least one processor being configured to: receive an image of an object, the image being captured by a camera; determine two-dimensional (2D) image points on perimeters of the object in the image; convert, using a rotation component of a homography matrix, the 2D image points into corresponding three-dimensional (3D) points on a 3D conic section that passes through a center of the camera and the perimeters of the object; normalize the 3D points on the 3D conic section; determine a principal direction to a center of the object, based on the normalized 3D points on the 3D conic section; and determine a 2D object center based on the principal direction and the rotation component of the homography matrix.
- 2 . The apparatus of claim 1 , wherein the at least one processor is further configured to: determine an angle (θ) at each of the 3D points, based on the normalized 3D points on the 3D conic section and the principal direction to the center of the object; and determine an angular size of the object, based on the angle (θ) at each of the 3D points, the angular size defining an angle formed between the 3D points with respect to the center of the camera.
- 3 . The apparatus of claim 2 , wherein the at least one processor is further configured to: determine a range of the object representing a length of a radius vector from the center of the camera to the object, based on the angular size.
- 4 . The apparatus of claim 3 , wherein the at least one processor is further configured to determine a 3D position of the object, based on the range of the object, the principal direction to the center of the object and a position of the camera.
- 5 . The apparatus of claim 3 , wherein the range of the object is determined by dividing a diameter of the object by the angular size.
- 6 . The apparatus of claim 1 , wherein the camera has been calibrated without decomposing intrinsic and extrinsic camera parameters explicitly.
- 7 . The apparatus of claim 1 , wherein the object is a ball.
- 8 . The apparatus of claim 1 , wherein the object is a golf ball.
- 9 . The apparatus of claim 1 , wherein the object is a baseball.
- 10 . The apparatus of claim 1 , wherein the object is a cricket ball.
- 11 . A method comprising: receiving an image of an object, the image being captured by a camera; determining two-dimensional (2D) image points on perimeters of the object in the image; converting, using a rotation component of a homography matrix, the 2D image points into corresponding three-dimensional (3D) points on a 3D conic section that passes through a center of the camera and the perimeters of the object; normalizing the 3D points on the 3D conic section; determining a principal direction to a center of the object, based on the normalized 3D points on the 3D conic section; and determining a 2D object center based on the principal direction and the rotation component of the homography matrix.
- 12 . The method of claim 11 , further comprising: determining an angle (θ) at each of the 3D points, based on the normalized 3D points on the 3D conic section and the principal direction to the center of the object; and determining an angular size of the object, based on the angle (θ) at each of the 3D points, the angular size defining an angle formed between the 3D points with respect to the center of the camera.
- 13 . The method of claim 12 , further including determining a range of the object representing a length of a radius vector from a center of the camera to the object, based on the angular size.
- 14 . The method of claim 13 , further including determining a 3D position of the object, based on the range of the object, the principal direction to the center of the object and a position of the camera.
- 15 . The method of claim 11 , wherein the camera has been calibrated without decomposition into explicit intrinsic and extrinsic camera parameters.
- 16 . The method of claim 11 , wherein the object is a ball.
- 17 . The method of claim 11 , wherein the object is a golf ball.
- 18 . The method of claim 11 , wherein the object is a baseball.
- 19 . The method of claim 11 , wherein the object is a cricket ball.
- 20 . A computer readable storage medium storing a program of instructions executable by a machine to perform a method of: receiving an image of an object, the image being captured by a camera; determining two-dimensional (2D) image points on perimeters of the object in the image; converting, using a rotation component of a homography matrix, the 2D image points into corresponding three-dimensional (3D) points on a 3D conic section that passes through a center of the camera and the perimeters of the object; normalizing the 3D points on the 3D conic section; determining a principal direction to a center of the object, based on the normalized 3D points on the 3D conic section; and determining a 2D object center based on the principal direction and the rotation component of the homography matrix.
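The procedure recited in claims 1, 11 and 20, together with the angular-size and range computations of claims 2, 3, 5, 12 and 13, can be illustrated with a minimal numerical sketch. This is not the patented implementation: the function name `ball_position`, the use of a combined back-projection matrix `K_inv_R` in place of the claimed "rotation component of a homography matrix", and the mean-of-rays estimate of the principal direction are assumptions made purely for illustration.

```python
import numpy as np

def ball_position(image_pts, K_inv_R, cam_center, diameter):
    """Estimate a ball's 3D position from 2D perimeter points in one image.

    image_pts : (N, 2) array of 2D image points on the object's perimeter.
    K_inv_R   : 3x3 matrix mapping homogeneous image points to 3D ray
                directions (an assumed stand-in for the claimed rotation
                component of the homography matrix).
    """
    # Back-project the 2D perimeter points to 3D rays: these rays lie on the
    # 3D cone ("conic section") through the camera center and the perimeter.
    homog = np.column_stack([image_pts, np.ones(len(image_pts))])
    rays = (K_inv_R @ homog.T).T
    # Normalize the 3D points to unit length.
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # Principal direction to the object's center: normalized mean of the rays.
    d = rays.mean(axis=0)
    d /= np.linalg.norm(d)
    # Angle theta at each 3D point relative to the principal direction.
    theta = np.arccos(np.clip(rays @ d, -1.0, 1.0))
    # Angular size: the full angle the object subtends at the camera center.
    angular_size = 2.0 * theta.mean()
    # Range (claim 5): divide the object's diameter by its angular size.
    rng = diameter / angular_size
    # 3D position (claim 4): range along the principal direction from the camera.
    return cam_center + rng * d
```

With a golf ball (diameter 42.7 mm) centered 5 m down the optical axis and an identity back-projection matrix, the sketch recovers the 5 m range to within the small-angle approximation error.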
Description
CROSS REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part application of U.S. patent application Ser. No. 18/802,634, filed Aug. 13, 2024, which is a divisional application of U.S. patent application Ser. No. 18/595,592, filed Mar. 5, 2024 (now U.S. patent Ser. No. 12/118,750), the entire contents of which are incorporated by reference herein.
TECHNICAL FIELD
The present disclosure relates to a camera calibration method, and to a method and system of using a calibrated camera or cameras for tracking and measuring the motion of a target object in three-dimensional space.
BACKGROUND
Fundamentally, a camera provides an image mapping of a three-dimensional space onto a two-dimensional space or image plane. Current camera calibration techniques supply the model parameter values needed to compute the line-of-sight ray in space that corresponds to a point in the image plane. A calibration or "projection" matrix, which is estimated during camera calibration, is typically decomposed into eleven geometric parameters that define the standard pinhole camera model. Typically, camera model parameters include extrinsic and intrinsic parameters. The extrinsic camera parameters include the 3D location and orientation of a camera in the world, and the intrinsic camera parameters include, among others, the focal length and the relationships between pixel coordinates and camera coordinates. In many applications, camera calibration is necessary to recover 3D quantitative measures about an observed scene from 2D images. For example, with a calibrated camera, it can be determined how far an object is from the camera, the height of the object, and so on. Typical calibration techniques use a 3D, 2D or 1D calibration object whose geometry in 3D space is known with very good precision.
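The eleven-parameter pinhole model described above can be made concrete with a short sketch: five intrinsic parameters in the matrix K, plus three rotation and three translation parameters, combine into a single 3x4 projection matrix. The specific numbers below are illustrative, not from the disclosure.

```python
import numpy as np

# Intrinsic parameters (5): focal lengths, zero skew, principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsic parameters (6): a rotation (here 10 degrees about the y-axis)
# and a translation of the world relative to the camera.
a = np.deg2rad(10.0)
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [       0.0, 1.0,       0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.1, -0.2, 2.0])

# The 3x4 projection matrix folds all eleven geometric parameters together.
P = K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Map a 3D world point to 2D pixel coordinates (homogeneous division)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

Projecting the world origin, for instance, lands it at pixel coordinates determined entirely by K and t, which is exactly the mapping a calibration procedure must recover.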
From a set of world points and their image coordinates, one objective of camera calibration is to find a projection "matrix" and subsequently find the intrinsic and extrinsic camera parameters from that matrix in a decomposition step. However, the decomposition into extrinsic and intrinsic camera parameters is one of the major issues in calibration due to reprojection error. Further, in the decomposition step, several assumptions and constraints are made to extract the extrinsic and intrinsic camera parameters, which might not be true, e.g., assuming no lens distortion or no tilt. Accordingly, it is desirable to have a camera calibration method that does not require decomposition into intrinsic and extrinsic camera parameters. In addition, it is also desirable to have a camera system for taking measurements that avoids having to make any assumptions regarding intrinsic or extrinsic camera parameters.
SUMMARY
There is provided a camera system and method for taking measurements of a moving object that avoid having to make any assumptions regarding intrinsic camera parameters. Further, there is provided a camera system and method for taking measurements of a moving object that avoid having to split camera parameters apart from the camera's calibration matrix; as a result, the camera is ready to work with any lens, and with any shift or tilt (intentional or unintentional) in the setup of the camera, for tracking an object in motion. Additionally, there is provided a camera system calibration method for calibrating a camera used in taking measurements of a moving object without the decomposition of camera parameters into extrinsic and intrinsic parts. In an embodiment, the camera system and method include a single camera device. In one embodiment, during the calibration process, a virtual reference is aligned to a physical object in a global reference space to obtain the camera parameters.
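The first step described above, estimating the projection matrix from world-image correspondences without any decomposition into intrinsic and extrinsic parameters, is classically done with a direct linear transform (DLT). The following is a hedged sketch of that classical technique, not the disclosure's calibration method; the function name and data layout are assumptions.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 3D-2D correspondences.

    The matrix is recovered directly, up to scale, with no decomposition
    into intrinsic and extrinsic camera parameters.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear equations in the
        # twelve unknown entries of the projection matrix.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector for the smallest singular
    # value, i.e. the (approximate) null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)
```

Feeding synthetic, noise-free correspondences from a known camera back through this estimator reproduces the original image points, illustrating that the undecomposed matrix already contains everything needed for reprojection.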
There is also provided a robust camera calibration method and system whereby a user can use any camera without the need to fine-tune the camera parameters to a global reference. According to one aspect, there is provided a method for tracking an object in motion. The method comprises: capturing, from each of one or more calibrated cameras, one or more image frames of an object in motion, each of the one or more calibrated cameras having been calibrated according to a calibration method that generates and uses a respective transformation matrix for mapping three-dimensional (3D) real-world model features to corresponding two-dimensional (2D) image features; and determining, using a hardware processor, motion characteristics of the object in motion based on the captured one or more image frames from each of the one or more calibrated cameras, the determining of motion characteristics being based on implicit intrinsic camera parameters and implicit extrinsic camera parameters of the respective transformation matrix from each of the one or more calibrated cameras. In a further aspect, there is provided an object tracking system. The object tracking system includes a camera system comprising one or more calibrated cameras, each camera capturing one or more image frames of a position of an object in