US-12626390-B2 - Position estimation system, position estimation method, and program
Abstract
A position estimation system, a position estimation method, and a program that can estimate a position of a moving body without installing an installation object on the moving body are provided. A position estimation system includes a detection unit configured to generate a first mask image, a mask area masking a moving body in a photographed image being added to the first mask image, a perspective transformation unit configured to perform perspective transformation on the first mask image, and a position calculation unit configured to correct a first coordinate point of a first circumscribed rectangle set for the mask area in the first mask image using a second coordinate point of the mask area in a second mask image, which is obtained by performing the perspective transformation on the first mask image, to calculate a third coordinate point.
Inventors
- Masahiro KEMMOTSU
- Takuro SAWANO
- Yasuyoshi HATANO
- Shogo YASUYAMA
- Kento IWAHORI
- Keigo IKEDA
Assignees
- TOYOTA JIDOSHA KABUSHIKI KAISHA
Dates
- Publication Date: 2026-05-12
- Application Date: 2023-08-21
- Priority Date: 2022-09-30
Claims (9)
- 1 . A position estimation system comprising: a camera configured to photograph an imaging area including a target moving body; and a processor configured to: generate a first mask image by inputting a first image photographed by the camera to a trained machine learning model configured to mask the moving body in the first image, a mask area masking the moving body being added to the first mask image; perform perspective transformation on the first mask image; set as a first coordinate point a first vertex of a first circumscribed rectangle set in the mask area in the first mask image, the first vertex corresponding to a portion of the moving body; set as a second coordinate point a second vertex of a second circumscribed rectangle set in the mask area in a second mask image obtained by performing the perspective transformation on the first mask image, the second vertex corresponding to a same portion of the moving body as the first vertex; and correct the first coordinate point using the second coordinate point to calculate a third coordinate point indicating a position of the moving body in an image coordinate system.
- 2 . The position estimation system according to claim 1 , wherein the processor is further configured to correct distortion of the first image.
- 3 . The position estimation system according to claim 1 , wherein the processor is further configured to rotate the first image so that a vector direction of the moving body is directed to a predetermined direction.
- 4 . The position estimation system according to claim 1 , wherein the processor is further configured to remove from the first image a moved area corresponding to a distance which the moving body has moved and cut out an unmoved area including the moving body when the moving body has moved a distance exceeding a predetermined threshold.
- 5 . The position estimation system according to claim 1 , wherein the processor is further configured to, when the first mask image includes an untargeted moving body in addition to the moving body, remove from the first mask image the untargeted moving body.
- 6 . The position estimation system according to claim 1 , wherein the processor is further configured to calculate the position of the moving body in a global coordinate system by applying a position of the camera in the global coordinate system to correct the third coordinate point.
- 7 . The position estimation system according to claim 1 , wherein the moving body is a vehicle.
- 8 . A position estimation method comprising: photographing, by a camera, an imaging area including a target moving body; generating a first mask image by inputting a first image photographed by the camera to a trained machine learning model configured to mask the moving body in the first image, a mask area masking the moving body being added to the first mask image; performing perspective transformation on the first mask image; setting as a first coordinate point a first vertex of a first circumscribed rectangle set in the mask area in the first mask image, the first vertex corresponding to a portion of the moving body; setting as a second coordinate point a second vertex of a second circumscribed rectangle set in the mask area in a second mask image obtained by performing the perspective transformation on the first mask image, the second vertex corresponding to a same portion of the moving body as the first vertex; and correcting the first coordinate point using the second coordinate point to calculate a third coordinate point indicating a position of the moving body in an image coordinate system.
- 9 . A non-transitory computer readable medium storing a program for causing a computer to execute processing of: generating a first mask image by inputting a first image photographed by a camera to a trained machine learning model configured to mask a moving body in the first image, a mask area masking the moving body being added to the first mask image; performing perspective transformation on the first mask image; setting as a first coordinate point a first vertex of a first circumscribed rectangle set in the mask area in the first mask image, the first vertex corresponding to a portion of the moving body; setting as a second coordinate point a second vertex of a second circumscribed rectangle set in the mask area in a second mask image obtained by performing the perspective transformation on the first mask image, the second vertex corresponding to a same portion of the moving body as the first vertex; and correcting the first coordinate point using the second coordinate point to calculate a third coordinate point indicating a position of the moving body in an image coordinate system.
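The pipeline of claims 1, 8, and 9 can be sketched end to end with plain NumPy. Everything below that the claims do not fix is an assumption: the choice of the bottom-left vertex as the "specified vertex," the representation of the perspective transformation as a 3x3 homography matrix `H`, and the averaging rule used to combine the first and second coordinate points into the third. It is an illustrative sketch, not the patented implementation.

```python
import numpy as np

def bounding_rect(mask):
    """Circumscribed rectangle of the nonzero mask area: (x_min, y_min, x_max, y_max)."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def warp_points(H, pts):
    """Apply a 3x3 perspective (homography) matrix to an Nx2 array of pixel coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def estimate_position(mask, H):
    """Sketch of the claimed pipeline: first and second circumscribed
    rectangles, then a corrected third coordinate point."""
    # First coordinate point: a specified vertex (here, assumed bottom-left)
    # of the circumscribed rectangle in the first mask image.
    x0, y0, x1, y1 = bounding_rect(mask)
    first = np.array([x0, y1], dtype=float)

    # Second mask image: perspective-transform the mask pixels, then take the
    # corresponding vertex of its circumscribed rectangle.
    ys, xs = np.nonzero(mask)
    warped = warp_points(H, np.column_stack([xs, ys]))
    wx0, wy0 = warped.min(axis=0)
    wx1, wy1 = warped.max(axis=0)
    second = np.array([wx0, wy1])

    # Third coordinate point: correct the first point using the second point.
    # The claims leave the correction rule unspecified; here we map the second
    # vertex back through the inverse transform and average it with the first.
    back = warp_points(np.linalg.inv(H), second[None, :])[0]
    return (first + back) / 2.0
```

With the identity matrix as `H` the second rectangle coincides with the first, so the third coordinate point reduces to the chosen vertex of the first circumscribed rectangle; any real correction comes from a non-trivial perspective transformation.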
Description
CROSS REFERENCE TO RELATED APPLICATIONS
This application is based upon and claims the benefit of priority from Japanese patent application No. 2022-158932, filed on Sep. 30, 2022, the disclosure of which is incorporated herein in its entirety by reference.
BACKGROUND
The present disclosure relates to a position estimation system, a position estimation method, and a program. In particular, the present disclosure relates to a position estimation system, a position estimation method, and a program for moving bodies including vehicles. A technique for detecting positions of vehicles is disclosed in, for example, Japanese Unexamined Patent Application Publication No. 2001-51720. Japanese Unexamined Patent Application Publication No. 2001-51720 discloses a vehicle position detection apparatus including transmitting and receiving means installed on two predetermined positions on a vehicle, four returning means installed on at least three fixed points in a parking space of the vehicle, and control means connected to each transmitting and receiving means, each of the transmitting and receiving means communicating with two different returning means to detect a distance between them, and the control means calculating a vehicle position based on the distance detected by each transmitting and receiving means.
SUMMARY
However, since the technology described in Japanese Unexamined Patent Application Publication No. 2001-51720 requires each of the vehicle and the parking space to have an installation object, there is a problem that the position cannot be estimated when, for example, the moving body or the space is one on which an installation object cannot be installed.
The present disclosure has been made to solve such a problem, and an object thereof is to provide a position estimation system, a position estimation method, and a program that can estimate a position of a moving body without installing an installation object on the moving body.
A position estimation system according to an embodiment includes: an imaging unit configured to photograph an imaging area including a target moving body; a detection unit configured to generate a first mask image, a mask area masking the moving body in the image photographed by the imaging unit being added to the first mask image; a perspective transformation unit configured to perform perspective transformation on the first mask image; and a position calculation unit configured to set as a first coordinate point a specified vertex of a first circumscribed rectangle set in the mask area in the first mask image, set as a second coordinate point a vertex indicating the same position as that of the first coordinate point among vertices of a second circumscribed rectangle set in the mask area in a second mask image obtained by performing the perspective transformation on the first mask image, and correct the first coordinate point using the second coordinate point to calculate a third coordinate point indicating a position of the moving body in the image coordinate system.
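The summary, like the claims, invokes "perspective transformation" without fixing how the transform is obtained. In practice such a 3x3 matrix is typically solved from four point correspondences (this is what OpenCV's `cv2.getPerspectiveTransform` does), e.g. the corners of a road-surface trapezoid in the camera view mapped to a top-down rectangle. A minimal NumPy version of that solve, offered as an assumption about the intended transform rather than the patent's method:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 perspective-transformation matrix H mapping four
    source pixels to four destination pixels, with H[2,2] fixed to 1.
    Each correspondence contributes two linear equations in the eight
    remaining entries of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)
```

For a camera looking obliquely at the ground, `src` would be four ground-plane pixels in the photographed image and `dst` their desired positions in the bird's-eye view; the resulting `H` is then applied to the first mask image to obtain the second mask image.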
A position estimation method according to the embodiment includes: photographing, by an imaging unit, an imaging area including a target moving body; generating a first mask image, a mask area masking the moving body in the image photographed by the imaging unit being added to the first mask image; performing perspective transformation on the first mask image; and setting as a first coordinate point a specified vertex of a first circumscribed rectangle set in the mask area in the first mask image, setting as a second coordinate point a vertex indicating the same position as that of the first coordinate point among vertices of a second circumscribed rectangle set in the mask area in a second mask image obtained by performing the perspective transformation on the first mask image, and correcting the first coordinate point using the second coordinate point to calculate a third coordinate point indicating a position of the moving body in the image coordinate system.
A program according to the embodiment causes a computer to execute processing of: photographing, by an imaging unit, an imaging area including a target moving body; generating a first mask image, a mask area masking the moving body in the image photographed by the imaging unit being added to the first mask image; performing perspective transformation on the first mask image; and setting as a first coordinate point a specified vertex of a first circumscribed rectangle set in the mask area in the first mask image, setting as a second coordinate point a vertex indicating the same position as that of the first coordinate point among vertices of a second circumscribed rectangle set in the mask area in a second mask image obtained by performing the perspective transformation on the first mask image, and correcting the first coordinate point using the second coordinate point to calculate a third coordinate point indicating a position of the moving body in the image coordinate system.
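Claim 6 adds a final step: converting the third coordinate point from the image coordinate system to a global coordinate system by applying the camera's known global position. The claim does not give the conversion formula; the sketch below assumes the bird's-eye image is metrically scaled and axis-aligned with the global frame, and every name and parameter (`meters_per_pixel`, `camera_pixel`, and so on) is hypothetical.

```python
import numpy as np

def image_to_global(third_point, camera_global, meters_per_pixel, camera_pixel):
    """Illustrative image-to-global conversion: the pixel offset of the third
    coordinate point from the camera's pixel position is scaled to meters and
    added to the camera's known global position."""
    offset_px = np.asarray(third_point, dtype=float) - np.asarray(camera_pixel, dtype=float)
    return np.asarray(camera_global, dtype=float) + offset_px * meters_per_pixel
```

For example, a moving body 10 pixels east and 5 pixels north of the camera's footprint, at 0.05 m per pixel, ends up 0.5 m and 0.25 m from the camera's global position. A real deployment would also account for the bird's-eye view's orientation relative to the global frame, which this sketch omits.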