EP-4742158-A2 - THREE-DIMENSIONAL POINT GROUP DATA GENERATION METHOD, POSITION ESTIMATION METHOD, THREE-DIMENSIONAL POINT GROUP DATA GENERATION DEVICE, AND POSITION ESTIMATION DEVICE
Abstract
A three-dimensional point cloud generation method for generating a three-dimensional point cloud including one or more three-dimensional points includes: obtaining (i) a two-dimensional image obtained by imaging a three-dimensional object using a camera and (ii) a first three-dimensional point cloud obtained by sensing the three-dimensional object using a distance sensor (S11, S12); detecting, from the two-dimensional image, one or more attribute values of the two-dimensional image that are associated with a position in the two-dimensional image; and generating a second three-dimensional point cloud including one or more second three-dimensional points each having an attribute value, by performing, for each of the one or more attribute values detected, (i) identifying, from a plurality of three-dimensional points forming the first three-dimensional point cloud, one or more first three-dimensional points to which the position of the attribute value corresponds (S16), and (ii) appending the attribute value to the one or more first three-dimensional points identified (S17).
Inventors
- LASANG, PONGSAK
- WANG, CHI
- WU, ZHENG
- SHEN, SHENG MEI
- SUGIO, TOSHIYASU
- KOYAMA, TATSUYA
Assignees
- Panasonic Intellectual Property Corporation of America
Dates
- Publication Date
- 20260513
- Application Date
- 20181116
Claims (15)
- An apparatus comprising a processor, wherein the processor is configured to: obtain first point cloud data including, for each of a plurality of points, three-dimensional coordinates, one or more attribute values, and a degree of importance; receive, from a client device, a threshold value relating to the degree of importance; select, from the plurality of points, points whose degree of importance exceeds the threshold value; and transmit second point cloud data including the selected points to the client device.
- The apparatus according to claim 1, wherein the processor is configured to calculate the degree of importance based on the one or more attribute values.
- The apparatus according to claim 1, wherein the processor is configured to calculate the degree of importance based on a total number of two-dimensional images in which a current point is visible.
- The apparatus according to claim 3, wherein the processor is configured to output a higher value for the degree of importance with an increase in the total number of two-dimensional images in which the current point is visible.
- The apparatus according to claim 1, wherein the processor is configured to calculate the degree of importance based on at least one of: (i) a matching error between feature points of two-dimensional images; and (ii) a matching error between a three-dimensional map and the feature points.
- The apparatus according to claim 1, wherein the threshold value is set in accordance with specifications of the client device.
- The apparatus according to claim 6, wherein the threshold value is set higher with an increase in information processing capacity and/or detection capacity of the client device.
- The apparatus according to claim 1, wherein the one or more attribute values include a first feature quantity of each point of the plurality of points calculated from first image data obtained from an image sensor.
- The apparatus according to claim 8, wherein the one or more attribute values further include a second feature quantity of each point of the plurality of points calculated from second image data obtained from the image sensor.
- The apparatus according to claim 8, wherein for each point of the plurality of points, the one or more attribute values include feature quantities of each of N two-dimensional images in which the point is visible.
- The apparatus according to claim 8, wherein the first feature quantity includes an ORB, SIFT, or DAISY feature quantity.
- The apparatus according to claim 8, wherein the processor is configured to combine feature quantities having similar values.
- The apparatus according to claim 8, wherein when a total number of feature quantities assigned to a point exceeds an upper limit, the processor is configured to select the N feature quantities having the highest values.
- The apparatus according to claim 1, wherein the processor is configured to sort the plurality of points in descending order of the degree of importance before selecting the points.
- The apparatus according to claim 1, wherein the processor is configured to encode the selected points and transmit an encoded stream including the selected points.
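The server-side selection described in claims 1 and 14 can be illustrated with a minimal sketch. The `Point` record and `select_points` helper below are illustrative names, not part of the claimed apparatus; the sketch assumes a scalar degree of importance per point and omits the encoding and transmission steps of claim 15.

```python
from dataclasses import dataclass

@dataclass
class Point:
    xyz: tuple          # three-dimensional coordinates
    attributes: list    # one or more attribute values (e.g. feature quantities)
    importance: float   # degree of importance

def select_points(first_cloud, threshold):
    """Select the points whose degree of importance exceeds the
    client-supplied threshold (claim 1), sorted from high to low
    importance (claim 14) before transmission."""
    selected = [p for p in first_cloud if p.importance > threshold]
    selected.sort(key=lambda p: p.importance, reverse=True)
    return selected
```

Per claims 6 and 7, the threshold itself would be chosen by the client according to its processing and detection capacity, so a more capable client receives a denser cloud.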
Description
TECHNICAL FIELD

The present disclosure relates to a three-dimensional point cloud generation method, a position estimation method, a three-dimensional point cloud generation device, and a position estimation device.

BACKGROUND ART

Patent Literature (PTL) 1 discloses a method for transferring three-dimensional shape data. In PTL 1, three-dimensional shape data is, for example, sent out to a network per element, such as a polygon or a voxel. At the receiver side, the three-dimensional shape data is collected, and an image is developed and displayed per received element.

CITATION LIST

Patent Literature
PTL 1: Japanese Unexamined Patent Application Publication No. 9-237354

SUMMARY OF THE INVENTION

TECHNICAL PROBLEM

However, the method of PTL 1 was considered to require further improvement.

SOLUTIONS TO PROBLEM

In order to achieve such improvement, a three-dimensional point cloud generation method for generating, using a processor, a three-dimensional point cloud including a plurality of three-dimensional points according to an aspect of the present disclosure includes: obtaining (i) a two-dimensional image obtained by imaging a three-dimensional object using a camera and (ii) a first three-dimensional point cloud obtained by sensing the three-dimensional object using a distance sensor; detecting, from the two-dimensional image obtained, one or more attribute values of the two-dimensional image that are associated with a position in the two-dimensional image; and generating a second three-dimensional point cloud including one or more second three-dimensional points each having an attribute value, by performing, for each of the one or more attribute values detected, (i) identifying, from a plurality of three-dimensional points forming the first three-dimensional point cloud, one or more first three-dimensional points to which the position of the attribute value corresponds, and (ii) appending the attribute value to the one or more first three-dimensional points identified.
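The identifying and appending steps above (S16 and S17 in the abstract) can be sketched as follows. This is a minimal illustration, assuming a pinhole camera model, a first point cloud already expressed in camera coordinates, and a simple pixel-distance tolerance for deciding which points a detected attribute position corresponds to; the function and parameter names are hypothetical.

```python
def project(point, fx, fy, cx, cy):
    # Pinhole projection of a 3D point in the camera frame onto the
    # 2D image plane (assumption: camera-frame coordinates, z > 0).
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def append_attributes(first_cloud, attributes, intrinsics, tol=1.0):
    """For each detected attribute value (an image position plus a value),
    identify the first three-dimensional points whose projection falls
    near that position, and append the attribute value to them, yielding
    the second three-dimensional point cloud."""
    fx, fy, cx, cy = intrinsics
    second_cloud = []
    for (u, v), value in attributes:
        for pt in first_cloud:
            pu, pv = project(pt, fx, fy, cx, cy)
            if abs(pu - u) <= tol and abs(pv - v) <= tol:
                second_cloud.append((pt, value))
    return second_cloud
```

A real implementation would use the calibrated camera pose relative to the distance sensor rather than assuming a shared frame, and an efficient spatial lookup rather than the linear scan shown here.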
A position estimation method for estimating, using a processor, a current position of a moving body according to an aspect of the present disclosure includes: obtaining a three-dimensional point cloud including a plurality of three-dimensional points to each of which a first attribute value is appended in advance, the first attribute value being an attribute value of a first two-dimensional image obtained by imaging a three-dimensional object; obtaining a second two-dimensional image of a surrounding area of the moving body that has been imaged by a camera included in the moving body; detecting, from the second two-dimensional image obtained, one or more second attribute values of the second two-dimensional image corresponding to a position in the second two-dimensional image; for each of the one or more second attribute values detected, generating one or more combinations formed by the second attribute value and one or more fifth three-dimensional points, by identifying the one or more fifth three-dimensional points associated with the second attribute value from the plurality of three-dimensional points; obtaining, from a memory device, a position and an orientation of the camera with respect to the moving body; and calculating a position and an orientation of the moving body using the one or more combinations generated, and the position and the orientation of the camera obtained.

Note that these general and specific aspects may be implemented as a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or as any combination thereof.

ADVANTAGEOUS EFFECT OF INVENTION

The present disclosure makes further improvement possible.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a diagram showing an outline of a position estimation system.
FIG. 2 is a block diagram showing an example of a functional structure of the position estimation system.
FIG. 3 is a block diagram showing an example of a functional structure of a vehicle serving as a client device.
FIG. 4 is a sequence diagram showing an example of an operation of the position estimation system.
FIG. 5 is a diagram for describing the operation of the position estimation system.
FIG. 6 is a block diagram showing an example of a functional structure of a mapping unit.
FIG. 7 is a detailed flowchart of an example of a mapping process.
FIG. 8 is a flowchart of an example of an operation of a three-dimensional point cloud generation device according to Variation 1.
FIG. 9 is a detailed flowchart of an example of a process for calculating a degree of importance.
FIG. 10 is a diagram showing an example of a data structure of a third three-dimensional point.
FIG. 11 is a block diagram showing an example of a functional structure of an encoder.
FIG. 12 is a detailed flowchart of an example of an encoding process.

DESCRIPTION OF EXEMPLARY EMBODIMENTS
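The combination-generating step of the position estimation method can be sketched as a nearest-attribute match between the detected second attribute values and the stored point cloud. The sketch below is illustrative only: it assumes scalar attribute values for simplicity (real feature quantities such as ORB or SIFT are vectors compared by descriptor distance), and the names `match_attributes`, `map_points`, and `detected` are hypothetical. The resulting 2D-3D combinations would then be fed, together with the stored camera position and orientation, to a pose solver (e.g. a PnP solver, not shown) to calculate the position and orientation of the moving body.

```python
def match_attributes(map_points, detected):
    """Form combinations of (detected 2D position, stored 3D point) by
    pairing each detected second attribute value with the stored point
    whose appended attribute value is nearest to it."""
    combos = []
    for pos2d, value in detected:
        # map_points: list of ((x, y, z), attribute_value) pairs
        best = min(map_points, key=lambda mp: abs(mp[1] - value))
        combos.append((pos2d, best[0]))
    return combos
```

In practice a distance threshold and ratio test would be applied so that unmatched detections are discarded rather than forced onto the nearest point.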