JP-7856307-B2 - Self-localization method, self-localization device, and program

JP 7856307 B2

Inventors

  • 柳瀬 龍
  • 平野 大智
  • 米陀 佳祐
  • 菅沼 直樹

Assignees

  • 株式会社ムービーズ

Dates

Publication Date
2026-05-11
Application Date
2022-08-04

Claims (9)

  1. A self-localization method for estimating the position on a map of an observation body equipped with an observation unit, the method comprising: a computation step of computing the ratio of the area indicating a landmark in a reference map image, stored in a storage unit, that indicates the location of a road landmark in a predetermined region, and the ratio of the area indicating the landmark in observation information that includes road information and is acquired by the observation unit; a calculation step of calculating, based on the two aforementioned ratios, a matching degree representing the degree to which the area indicating the landmark in the reference map image and the area indicating the landmark in the observation information coincide; and an estimation step of estimating the position of the observation body based on the matching degree.
  2. The self-localization method according to claim 1, further comprising, when the estimation step is determined to be a pass, an adoption step of estimating the position of the observation body on the map using the position of the reference map image.
  3. The self-localization method according to claim 2, wherein, in the computation step, a first ratio, which is the ratio of the area indicating the landmark in the reference map image indicating the location of the road landmark in the predetermined region, and a second ratio, which is the ratio of the area indicating the landmark in the observation information in a region corresponding to the predetermined region, are computed; in the calculation step, the ratio of the first ratio to the second ratio is calculated; and in the estimation step, the estimate is determined to be a pass when the ratio of the first ratio to the second ratio is within a predetermined threshold range.
  4. The self-localization method according to claim 1 or 2, wherein, in the calculation step, the reference map image indicating the location of the landmark is generated from an image acquired by an imaging unit of the observation body.
  5. The self-localization method according to claim 4, wherein, in the calculation step, the reference map image is generated using deep learning.
  6. The self-localization method according to claim 1 or 2, wherein, in the calculation step, the reference map image in the predetermined region and the observation information in the region corresponding to the predetermined region are each further divided into a plurality of smaller sub-regions, and the calculation step is performed for each of the plurality of sub-regions based on the correlation between the reference map image and the observation information.
  7. The self-localization method according to claim 1 or 2, wherein, in the estimation step, when the position of the observation body is estimated, the reference map image is updated using only the information of the region indicating the landmark.
  8. A program for causing a computer to execute the self-localization method according to claim 1.
  9. A self-localization device that estimates the position on a map of an observation body equipped with an observation unit, the device comprising: a computation unit that computes the ratio of the area indicating a landmark in a reference map image, stored in a storage unit, that indicates the location of a road landmark in a predetermined region, and the ratio of the area indicating the landmark in observation information acquired by the observation unit; a calculation unit that calculates, based on the two aforementioned ratios, a matching degree representing the degree to which the area indicating the landmark in the reference map image and the area indicating the landmark in the observation information coincide; and an estimation unit that estimates the position of the observation body based on the calculated matching degree.
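Claims 1 through 3 above describe comparing the fraction of landmark area in the reference map image (the first ratio) with the fraction in the observation information (the second ratio), and accepting the estimate only when the ratio of the two falls within a predetermined threshold range. A minimal illustrative sketch of that check in Python follows; the binary grids, function names, and threshold values are assumptions for illustration and are not taken from the patent:

```python
def landmark_area_ratio(mask):
    """Fraction of cells in a binary grid flagged as landmark
    (e.g. white-line pixels in a bird's-eye-view image)."""
    cells = [v for row in mask for v in row]
    return sum(cells) / len(cells)

def ratio_of_ratios(reference_mask, observed_mask):
    """First ratio (reference map image) divided by the second
    ratio (observation information), per claims 1 and 3."""
    second = landmark_area_ratio(observed_mask)
    if second == 0.0:   # no landmarks visible, e.g. snow cover
        return float("inf")
    return landmark_area_ratio(reference_mask) / second

def is_pass(reference_mask, observed_mask, low=0.8, high=1.25):
    """Claim 3: the estimate passes when the ratio of the two
    ratios lies within a threshold band (band values assumed)."""
    return low <= ratio_of_ratios(reference_mask, observed_mask) <= high

# Example: both grids contain the same fraction of landmark cells,
# so the ratio of ratios is 1.0 and the estimate passes.
ref = [[1, 1, 0, 0],
       [0, 0, 0, 0]]
obs = [[1, 0, 0, 0],
       [1, 0, 0, 0]]
print(is_pass(ref, obs))
```

An all-zero observation grid (landmarks hidden by snow) drives the ratio to infinity, so the estimate is rejected rather than matched against a misleadingly empty view.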

Description

This invention relates to a self-localization method, a self-localization device, and a program.

In autonomous driving and other applications, technologies for estimating the self-position of a moving vehicle have been proposed. Patent Document 1 discloses a method for calibrating the external parameters of an onboard sensor, which includes determining mapping parameters based on a first feature point acquired by LiDAR (Light Detection and Ranging) and a second feature point in a map database, and determining that the accuracy check for the first feature point succeeded if the difference between the first and second feature points is within a predetermined value.

[Patent Document 1] International Publication No. 2019/007263

Figure 1 is a block diagram showing the configuration of the self-localization device in the embodiment.
Figure 2 is a flowchart illustrating the overall operation of the self-localization method in the embodiment.
Figure 3 shows an example of calculating the correlation between a generated map image and a reference map image.
Figure 4 shows an example of calculating the ratios from a generated map image and a reference map image.
Figure 5 is a graph showing the ratio of the first ratio to the second ratio.
Figure 6 is a flowchart showing the operation of rejecting the self-localization result when the matching degree is low.
Figure 7 is a flowchart illustrating the detailed operation of the self-localization device in the embodiment.

(Embodiment) In this embodiment, a self-localization method, a self-localization device, and the like that can estimate the self-position more accurately than conventional methods, even when road conditions have been altered by snow accumulation or the like, are described.

[Configuration] First, the configuration of the self-localization device in the embodiment is described.
Figure 1 is a block diagram showing the configuration of the self-localization device 1 in the embodiment. The self-localization device 1 comprises a control unit 10 and a storage unit 20. The control unit 10 comprises a computation unit 30, a calculation unit 31, an estimation unit 32, a rejection unit 33, and an adoption unit 34. The self-localization device 1 is configured to communicate with the observation body 80 through a communication unit (not shown) or the like. The self-localization device 1 estimates the position of the observation body 80 on the map by comparing a generated map image, generated based on observation information acquired by the observation unit of the observation body 80, with a reference map image, which is map information stored in the storage unit 20. Here, the observation body 80 is equipped with a sensor as the observation unit and acquires observation information, including road information, using this sensor. Examples of the sensor include a camera as an imaging unit and a LiDAR as a distance detection unit. The observation information includes image information, video information, and information on the distance between the observation body 80 and the observed object. The road information includes at least a landmark that serves as a road marker. While white lines on the road are the most preferred landmarks, the landmarks are not limited to these; guardrails, road curbs, or signs may also be used. Furthermore, the landmark may be a combination of at least two selected from white lines, guardrails, curbs, and signs. The road information may also include the shape and width of the road. The generated map image is an orthorectified image of a bird's-eye-view map generated in real time based on the observation information acquired by the sensor of the observation body 80. Alternatively, the bird's-eye-view map image before orthorectification may be used as the generated map image.
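As a rough illustration of how a generated map image like the one described above could be produced, the following sketch rasterizes 2-D landmark detections (in the vehicle frame, in metres) into a top-down binary grid. The coordinate convention, grid size, and resolution are assumptions chosen for illustration, not details taken from the patent:

```python
def to_birds_eye_grid(points, size_m=20.0, resolution_m=0.5):
    """Rasterize detected landmark points (x, y) in metres, with the
    vehicle at the grid centre, into a binary top-down grid — a
    simplified stand-in for the generated map image."""
    n = int(size_m / resolution_m)
    grid = [[0] * n for _ in range(n)]
    for x, y in points:
        col = int((x + size_m / 2) / resolution_m)
        row = int((y + size_m / 2) / resolution_m)
        if 0 <= row < n and 0 <= col < n:  # drop points outside the map
            grid[row][col] = 1
    return grid

# A short dashed white line ahead of and to the side of the vehicle.
detections = [(2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
grid = to_birds_eye_grid(detections)
```

A grid of this form could then be compared cell-by-cell against the reference map image when computing the landmark-area ratios and the matching degree.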
The generated map image is a concrete example of the observation information. The reference map image also includes at least road-related landmarks, as described above. The reference map image may be pre-stored in the storage unit 20, or it may be acquired from an external source and stored in the storage unit 20. The reference map image is three-dimensional geospatial information on the road and its surroundings. For example, the reference map image may be observation information including road information previously acquired by the sensor, observation information including road information created separately in advance, or a dynamic map created for autonomous driving. Furthermore, the reference map image may be high-precision three-dimensional geospatial information (basic map information) that allows the vehicle's position on the road and its surroundings to be identified at the lane level. The reference map image may also include various additional map information necessary to support autonomous driving, etc., on top of the high-precision three-dimens